title (stringlengths 28–135) | abstract (stringlengths 0–12k) | introduction (stringlengths 0–12k)
---|---|---|
Abdal_3DAvatarGAN_Bridging_Domains_for_Personalized_Editable_Avatars_CVPR_2023
|
Abstract Modern 3D-GANs synthesize geometry and texture by training on large-scale datasets with a consistent structure. Training such models on stylized, artistic data, with often unknown, highly variable geometry and camera information, has not yet been shown possible. Can we train a 3D GAN on such artistic data, while maintaining multi-view consistency and texture quality? To this end, we propose an adaptation framework, where the source domain is a pre-trained 3D-GAN, while the target domain is a 2D-GAN trained on artistic datasets. We then distill the knowledge from a 2D generator to the source 3D generator. To do that, we first propose an optimization-based method to align the distributions of camera parameters across domains. Second, we propose regularizations necessary to learn high-quality texture, while avoiding degenerate geometric solutions, such as flat shapes. Third, we show a deformation-based technique for modeling exaggerated geometry of artistic domains, enabling, as a byproduct, personalized geometric editing. Finally, we propose a novel inversion method for 3D-GANs linking the latent spaces of the source and the target domains. Our contributions, for the first time, allow for the generation, editing, and animation of personalized artistic 3D avatars on artistic datasets. Project Page: https://rameenabdal.github.io/3DAvatarGAN
|
1. Introduction Photo-realistic portrait face generation is an iconic application demonstrating the capability of generative models, especially GANs [28, 30, 31]. A recent development has witnessed an advancement from straightforwardly synthesizing 2D images to learning 3D structures without 3D supervision, referred to as 3D-GANs [10, 41, 55, 64]. (Part of the work was done during an internship at Snap Inc.) Such training is feasible with datasets containing objects with highly consistent geometry, enabling a 3D-GAN to learn a distribution of shapes and textures. In contrast, artistically stylized datasets [25, 65] have arbitrary exaggerations of both geometry and texture; for example, the nose, cheeks, and eyes can be arbitrarily drawn, depending on the style of the artist as well as on the features of the subject, see Fig. 1. Training a 3D-GAN on such data becomes problematic due to the challenge of learning such an arbitrary distribution of geometry and texture. In our experiments (Sec. 5.1), 3D-GANs [10] generate flat geometry and essentially become 2D-GANs. A natural question arises: can a 3D-GAN synthesize consistent novel views of images belonging to artistically stylized domains, such as the ones in Fig. 1?

In this work, we propose a domain-adaptation framework that allows us to answer the question positively. Specifically, we fine-tune a pre-trained 3D-GAN using a 2D-GAN trained on a target domain. Despite being well explored for 2D-GANs [25, 65], existing domain adaptation techniques are not directly applicable to 3D-GANs, due to the nature of 3D data and the characteristics of 3D generators.

The geometry and texture of stylized 2D datasets can be arbitrarily exaggerated depending on the context, artist, and production requirements. Due to this, no reliable way exists to estimate camera parameters for each image, whether using an off-the-shelf pose detector [72] or a manual labeling effort. To enable the training of 3D-GANs on such challenging datasets, we propose three contributions: (1) an optimization-based method to align distributions of camera parameters between domains; (2) texture, depth, and geometry regularizations to avoid degenerate, flat solutions and ensure high visual quality (furthermore, we redesign the discriminator training to make it compatible with our task); and (3) a Thin Plate Spline (TPS) 3D deformation module operating on a tri-plane representation to allow for certain large and sometimes extreme geometric deformations, which are so typical in artistic domains.

The proposed adaptation framework enables the training of 3D-GANs on complex and challenging artistic data. The previous success of domain adaptation in 2D-GANs unleashed a number of exciting applications in the content creation area [25, 65]. Given a single image, such methods first find a latent code corresponding to it using GAN inversion, followed by latent editing that produces the desired effect in the image space. Compared to 2D-GANs, the latent space of 3D-GANs is more entangled, making it more challenging to link the latent spaces between domains and rendering the existing inversion and editing techniques not directly applicable. Hence, we take a step further and explore the use of our approach for 3D artistic avatar generation and editing.
Our final contribution to enable such applications is (4) a new inversion method for coupled 3D-GANs. In summary, the proposed domain-adaptation framework allows us to train 3D-GANs on challenging artistic datasets with exaggerated geometry and texture. We call our method 3DAvatarGAN as it, for the first time, offers generation, editing, and animation of personalized, stylized, artistic avatars obtained from a single image. Our results (see Sec. 5.2) show the high-quality 3D avatars made possible by our method compared to naive fine-tuning.
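The TPS deformation module itself is not specified in this excerpt, so the following is only a rough, hedged illustration of the underlying idea of editing geometry by resampling tri-plane features with a smooth 2D displacement field (which, in the actual method, would come from a thin-plate-spline fit to control points). All names, tensor shapes, and the dense-displacement simplification below are assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def warp_triplane(planes: torch.Tensor, displacement: torch.Tensor) -> torch.Tensor:
    """Warp tri-plane features with a dense per-plane 2D displacement field.

    planes:       (3, C, H, W)  -- hypothetical XY, XZ, YZ feature planes.
    displacement: (3, H, W, 2)  -- per-plane offsets in normalized [-1, 1] coordinates.
    Returns warped planes of the same shape.
    """
    _, _, H, W = planes.shape
    # Base sampling grid in normalized coordinates, (x, y) order as expected by grid_sample.
    ys = torch.linspace(-1.0, 1.0, H)
    xs = torch.linspace(-1.0, 1.0, W)
    gy, gx = torch.meshgrid(ys, xs, indexing="ij")
    base = torch.stack([gx, gy], dim=-1)          # (H, W, 2)
    grid = base.unsqueeze(0) + displacement       # (3, H, W, 2)
    # Bilinear resampling of each plane at the displaced locations.
    return F.grid_sample(planes, grid, mode="bilinear", align_corners=True)

# Toy usage: a small smooth displacement field stands in for a TPS-derived one.
planes = torch.randn(3, 32, 64, 64)
disp = 0.05 * torch.randn(3, 64, 64, 2)
print(warp_triplane(planes, disp).shape)  # torch.Size([3, 32, 64, 64])
```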
|
Bhunia_Person_Image_Synthesis_via_Denoising_Diffusion_Model_CVPR_2023
|
Abstract The pose-guided person image generation task requires synthesizing photorealistic images of humans in arbitrary poses. The existing approaches use generative adversarial networks that do not necessarily maintain realistic textures, or need dense correspondences that struggle to handle complex deformations and severe occlusions. In this work, we show how denoising diffusion models can be applied for high-fidelity person image synthesis with strong sample diversity and enhanced mode coverage of the learnt data distribution. Our proposed Person Image Diffusion Model (PIDM) disintegrates the complex transfer problem into a series of simpler forward-backward denoising steps. This helps in learning plausible source-to-target transformation trajectories that result in faithful textures and undistorted appearance details. We introduce a 'texture diffusion module' based on cross-attention to accurately model the correspondences between appearance and pose information available in source and target images. Further, we propose 'disentangled classifier-free guidance' to ensure close resemblance between the conditional inputs and the synthesized output in terms of both pose and appearance information. Our extensive results on two large-scale benchmarks and a user study demonstrate the photorealism of our proposed approach under challenging scenarios. We also show how our generated images can help in downstream tasks. Code is available at https://github.com/ankanbhunia/PIDM.

1. Introduction The pose-guided person image synthesis task [19, 23, 30] aims to render a person's image with a desired pose and appearance. Specifically, the appearance is defined by a given source image and the pose by a set of keypoints. Having control over the synthesized person images in terms of pose and style is an important requisite for applications such as e-commerce, virtual reality, the metaverse, and content generation for the entertainment industry. Furthermore, the generated images can be used to improve performance on downstream tasks such as person re-identification [30]. The challenge is to generate photorealistic outputs tightly conforming with the given pose and appearance information.

[Figure 1. (a) Our proposed PIDM is a denoising diffusion model where the generative path is conditioned on the pose and style. PIDM breaks down the problem into a series of forward-backward diffusion steps to learn the plausible transfer trajectories. (b) Comparison of PIDM with the recently introduced NTED [18]. PIDM accurately retains the appearance of the source style image while also producing images that are more natural and sharper, while NTED struggles to adequately preserve the source appearance in complex scenarios (marked in red boxes).]

In the literature, the person synthesis problem is generally tackled using Generative Adversarial Networks (GANs) [4], which try to generate a person in a desired pose using a single forward pass. However, preserving coherent structure, appearance, and global body composition in the new pose is a challenging task to achieve in one shot. The resulting outputs commonly exhibit deformed textures and unrealistic body shapes, especially when synthesizing occluded body parts (see Fig. 1). Further, GANs are prone to unstable training behaviour due to the adversarial min-max objective and lead to limited diversity in the generated samples.
Similarly, Variational Autoencoder (VAE) [8] based solutions have been explored that are relatively stable, but suffer from blurry details and offer lower-quality outputs than GANs due to their dependence on a surrogate loss for optimization.

In this work, we frame the person synthesis problem as a series of diffusion steps that progressively transfer a person in the source image to the target pose. Diffusion models [6] are motivated by non-equilibrium thermodynamics and define a Markov chain of slowly adding noise to the input samples (forward pass) and then reconstructing the desired samples from noise (reverse pass). In this manner, rather than modeling the complex transfer characteristics in a single go, our proposed person synthesis approach PIDM breaks down the problem into a series of forward-backward diffusion steps to learn the plausible transfer trajectories. Our approach can model the intricate interplay of the person's pose and appearance, offers higher diversity, and leads to photorealistic results without texture deformations (see Fig. 1). In contrast to existing approaches that deal with major pose shifts by requiring parser maps denoting human body parts [14, 23, 28], or dense 3D correspondences [9, 10] to fit human body topology by warping the textures, our approach can learn to generate realistic and authentic images without such detailed annotations.

Our major contributions are as follows:
• We develop the first diffusion-based approach for the pose-guided person synthesis task, which can work under challenging pose transformations while preserving appearance, texture, and global shape characteristics.
• To effectively model the complex interplay between appearance and pose information, we propose a texture diffusion module. This module exploits the correspondences between source and target appearance and pose details, thereby obtaining artefact-free images.
• In the sampling procedure, we introduce disentangled classifier-free guidance to tightly align the output image style and pose with the source image appearance and target pose, respectively. It ensures close resemblance between the conditions that are input to the generative model and the generated output.
• Our results on the DeepFashion [11] and Market-1501 [27] benchmarks set a new state of the art. We also report a user study to evaluate the qualitative features of generated images. Finally, we demonstrate that synthesized images can be used to improve performance on downstream tasks, e.g., person re-identification.

2. Related Work

Pose-guided Person Image Synthesis: The problem of human pose transfer has been studied extensively during recent years, especially with the unprecedented success of GAN-based models [15] for conditional image synthesis. An early attempt [13] proposes a coarse-to-fine approach to first generate a rough image with the target pose and then refine the results adversarially. The method simply concatenates the source image, the source pose, and the target pose as inputs to obtain the target image, which leads to feature misalignment. To address this issue, Essner et al. [3] attempt to disentangle the appearance and the pose of person images using a VAE-based design and a UNet-based skip-connection architecture.
Siarohin et al. [20] improve the model by introducing deformable skip connections to spatially transform the textures, which decomposes the overall deformation into a set of local affine transformations. Subsequently, some works [9, 10, 19] use flow-based deformation to transform the source information to improve pose alignment. Ren et al. [19] propose GFLA, which obtains global flow fields and an occlusion mask that are used to warp local patches of the source image to match the required pose. Another group of works [9, 10] use geometric models that fit a 3D mesh human model onto the 2D image and subsequently predict the 3D flow, which finally warps the source appearance. On the other hand, without any deformation operation, Zhu et al. [30] propose to progressively transform the source image by a sequence of transfer blocks. However, useful information can be lost during multiple transfers, which may result in blurry details. ADGAN [14] uses a texture encoder to extract style vectors for human body parts and feeds them into several AdaIN residual blocks to synthesize the final image. Methods such as PISE [23], SPGnet [12] and CASD [28] make use of parsing maps to generate the final image. CoCosNet [25, 29] extracts dense correspondences between cross-domain images with an attention-based operation. Recently, Ren et al. [18] propose a framework, NTED, based on neural texture extraction and distribution, which achieves superior results.

Diffusion Models: The existing GAN-based approaches attempt to directly transfer the style of the source image to a given target pose, which requires the architecture to model a complex transformation of pose. In this work, we present a diffusion-based framework named PIDM that breaks the pose transformation process into several conditional denoising diffusion steps, in which each step is relatively simple to model. Diffusion models [6] are recently proposed generative models that can synthesize high-quality images. After success in unconditional generation, these models were extended to conditional generation settings, demonstrating competitive or even better performance than GANs. For class-conditioned generation, Dhariwal et al. [2] introduce classifier-guided diffusion, which is later adapted by GLIDE [16] to enable conditioning over CLIP textual representations. Recently, Ho et al. [7] propose a classifier-free guidance approach that enables conditioning without requiring pretraining of classifiers. In this work, we develop the first diffusion-based approach for the pose-guided person synthesis task. We also introduce disentangled classifier-free guidance to tightly align the output image style and pose with the source image appearance and target pose, respectively.

3. Proposed Method

Motivation: The existing pose-guided person synthesis methods [12, 14, 18, 19, 23, 25, 28] rely on GAN-based frameworks where the model attempts to directly transfer the style of the source image into a given target pose in a single forward pass. It is quite challenging to directly capture the complex structure of the spatial transformation; therefore, current CNN-based architectures often struggle to transfer the intricate details of cloth texture patterns.
As a result, the existing methods yield noticeable artifacts, which become more evident when the generator needs to infer occluded body regions from the given source image. Motivated by these observations, we advocate that instead of learning the complex structure directly in a single step, deriving the final image through successive intermediate transfer steps can make the learning task simpler. To enable this progressive transformation scheme, we introduce the Person Image Diffusion Model, or PIDM, a diffusion-based [6] person image synthesis framework that breaks down the generation process into several conditional denoising diffusion steps, each step being relatively simple to model. A single step in the diffusion process can be approximated by a simple isotropic Gaussian distribution. We observe that our diffusion-based texture transfer technique PIDM brings the following benefits: (1) High-quality synthesis: Compared to previous GAN-based approaches, PIDM generates photo-realistic results when dealing with complex cloth textures and extreme pose angles. (2) Stable training: Existing GAN-based approaches use multiple loss objectives alongside adversarial losses, which are often difficult to balance, resulting in unstable training. In contrast, PIDM exhibits better training stability and mode coverage. Also, our model is less sensitive to hyperparameters. (3) Meaningful interpolation:
|
Our proposed PIDM allows us to achieve smooth and consistent linear interpolation in the latent space. (4) Flexibility: The models from existing work are usually task dependent, requiring different models for different tasks (e.g., separate models for unconditional, pose-conditional, and pose-and-style-conditional generation). In contrast, in our case a single model can be used to perform multiple tasks. Furthermore, PIDM inherits the flexibility and controllability of diffusion models, which enables various downstream tasks (e.g., appearance control, see Fig. 7) using our model.

Overall Framework: Fig. 2 shows an overview of the proposed generative model. Given a source image x_s and a target pose x_p, our goal is to train a conditional diffusion model p_θ(y | x_s, x_p), where the final output image y should not only satisfy the target pose matching requirement, but should also have the same style as x_s. The denoising network ε_θ in PIDM is a UNet-based design composed of a noise prediction module H_N and a texture encoder H_E. The encoder H_E encodes the texture patterns of the source image x_s. To obtain multi-scale features, we take outputs from different layers of H_E, resulting in a stacked feature representation F_s = [f_1, f_2, ..., f_m]. To transfer rich multi-scale texture patterns from the source image distribution to the noise prediction module H_N, we propose to use cross-attention based Texture Diffusion Blocks (TDB) that are embedded in different layers of H_N. This allows the network to fully exploit the correspondences between the source and target appearances, thus resulting in distortion-free images. During inference, to amplify the conditional signal of x_s and x_p in the sampled images, we adapt classifier-free guidance [7] in our sampling technique to achieve disentangled guidance. It not only improves the overall quality of the generation, but also ensures accurate transfer of texture patterns. We provide a detailed analysis of the proposed generative model in Sec. 3.1, the Texture Diffusion Blocks in Sec. 3.2, and our disentangled guidance based sampling technique in Sec. 3.3.

[Figure 2. (a) The proposed PIDM framework is a UNet-based network composed of a noise prediction module H_N and a texture encoder H_E. The encoder H_E encodes the texture patterns of the source image x_s. To obtain multi-scale features, we derive outputs from the different layers of H_E, resulting in a stacked feature representation F_s. To transfer rich multi-scale texture patterns from the source image distribution to the noise prediction module H_N, we propose to use cross-attention based Texture Diffusion Blocks (TDB) that are embedded in different layers of H_N. This allows the network to fully exploit the correspondences between source and target appearance, thus resulting in distortion-free images. (b) During inference, to amplify the conditional signal of x_s and x_p in the sampled images, we adapt classifier-free guidance [7] in our sampling technique to achieve disentangled guidance.]

3.1. Texture-Conditioned Diffusion Model

The generative modeling scheme of PIDM is based on the Denoising Diffusion Probabilistic Model (DDPM) [6]. The general idea of DDPM is to design a diffusion process that gradually adds noise to data sampled from the target distribution y_0 ~ q(y_0), while the backward denoising process attempts to learn the reverse mapping. The denoising diffusion process eventually converts an isotropic Gaussian noise y_T ~ N(0, I) into the target data distribution in T steps. Essentially, this scheme divides a complex distribution-modeling problem into a set of simple denoising problems. The forward diffusion path of DDPM is a Markov chain with the following conditional distribution:

$q(y_t \mid y_{t-1}) = \mathcal{N}\big(y_t;\ \sqrt{1-\beta_t}\, y_{t-1},\ \beta_t \mathbf{I}\big),$ (1)

where t ∈ [1, T] and β_1, β_2, ..., β_T is a fixed variance schedule with β_t ∈ (0, 1). Using the notation α_t = 1 − β_t and $\bar{\alpha}_t = \prod_{i=1}^{t} \alpha_i$, we can sample from q(y_t | y_0) in closed form at an arbitrary timestep t: $y_t = \sqrt{\bar{\alpha}_t}\, y_0 + \sqrt{1-\bar{\alpha}_t}\, \epsilon$, where ε ~ N(0, I). The true posterior q(y_{t−1} | y_t) can be approximated by a deep neural network that predicts the mean and variance of y_{t−1} with the following parameterization:

$p_\theta(y_{t-1} \mid y_t, x_p, x_s) = \mathcal{N}\big(y_{t-1};\ \mu_\theta(y_t, t, x_p, x_s),\ \Sigma_\theta(y_t, t, x_p, x_s)\big).$ (2)

Noise prediction module H_N: Instead of directly deriving μ_θ following [6], we predict the noise ε_θ(y_t, t, x_p, x_s) using our noise prediction module H_N.
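To make Eq. (1) and the closed-form sampling above concrete, here is a minimal PyTorch-style sketch of the forward corruption q(y_t | y_0) and the ε-prediction objective (stated as Eq. (3) below); the schedule values and the denoiser interface are assumptions, not PIDM's released implementation.

```python
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)          # assumed linear schedule beta_1..beta_T
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)      # \bar{alpha}_t = prod_i alpha_i

def q_sample(y0: torch.Tensor, t: torch.Tensor, eps: torch.Tensor) -> torch.Tensor:
    """Closed-form forward sampling: y_t = sqrt(abar_t) * y_0 + sqrt(1 - abar_t) * eps."""
    ab = alpha_bars[t].view(-1, 1, 1, 1)
    return ab.sqrt() * y0 + (1.0 - ab).sqrt() * eps

def mse_loss(denoiser, y0, x_pose, x_src):
    """Epsilon-prediction objective: the denoiser must recover the added noise."""
    b = y0.shape[0]
    t = torch.randint(0, T, (b,), device=y0.device)   # random timestep per sample
    eps = torch.randn_like(y0)
    yt = q_sample(y0, t, eps)
    eps_hat = denoiser(yt, t, x_pose, x_src)           # eps_theta(y_t, t, x_p, x_s), assumed signature
    return ((eps - eps_hat) ** 2).mean()
```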
The noisy image y_t is concatenated with the target pose x_p and passed through H_N to predict the noise. x_p guides the denoising process and ensures that the intermediate noise representations and the final image follow the given skeleton structure. To inject the desired texture patterns into the noise-predictor branch, we provide the multi-scale features of the texture encoder H_E through the Texture Diffusion Blocks (TDB). To train the denoising process, we first generate a noisy sample y_t ~ q(y_t | y_0) by adding Gaussian noise ε to y_0, then train a conditional denoising model ε_θ(y_t, t, x_p, x_s) to predict the added noise using a standard MSE loss:

$\mathcal{L}_{mse} = \mathbb{E}_{t \sim [1,T],\, y_0 \sim q(y_0),\, \epsilon}\, \lVert \epsilon - \epsilon_\theta(y_t, t, x_p, x_s) \rVert^2.$ (3)

Nichol et al. [17] present an effective learning strategy as an improved version of DDPM that needs fewer steps and applies an additional loss term L_vlb to learn the variance Σ_θ. The overall hybrid objective that we adopt is as follows:

$\mathcal{L}_{hybrid} = \mathcal{L}_{mse} + \mathcal{L}_{vlb}.$ (4)

3.2. Texture Diffusion Blocks (TDB)

To mix the style of the source image into the noise prediction branch, we employ cross-attention based TDB units that are embedded in different layers of H_N. Let F_h^l be the noise features in layer l of the noise prediction branch. Given the multi-scale texture features F_s derived from H_E as input to the TDB units, the attention module essentially computes the region of interest with respect to each query position, which is important to subsequently denoise the given noisy sample in the direction of the desired texture patterns. The keys K and values V are derived from H_E, while the queries Q are obtained from the noise features F_h^l. The attention operation is formulated as follows:

$Q = \phi^l_q(F^l_h), \quad K = \phi^l_k(F_s), \quad V = \phi^l_v(F_s),$
$F^l_{att} = \frac{Q K^T}{\sqrt{C}}, \qquad F^l_o = W^l\, \mathrm{softmax}(F^l_{att})\, V + F^l_h,$ (5)

where φ_q^l, φ_k^l, φ_v^l are layer-specific 1×1 convolution operators and W^l refers to learnable weights that generate the final cross-attended features F_o^l. We adopt TDB for the features at specific resolutions, i.e., 32×32, 16×16, and 8×8.
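Before turning to sampling, here is a minimal sketch of the cross-attention in Eq. (5), assuming 1×1 convolutions for the projections φ_q^l, φ_k^l, φ_v^l, a single attention head, and illustrative channel sizes; it is an approximation of a TDB unit, not the exact block used in PIDM.

```python
import math
import torch
import torch.nn as nn

class TextureDiffusionBlock(nn.Module):
    """Cross-attention between noise features (queries) and texture features (keys/values)."""

    def __init__(self, c_noise: int, c_tex: int):
        super().__init__()
        self.phi_q = nn.Conv2d(c_noise, c_noise, kernel_size=1)   # phi_q^l
        self.phi_k = nn.Conv2d(c_tex, c_noise, kernel_size=1)     # phi_k^l
        self.phi_v = nn.Conv2d(c_tex, c_noise, kernel_size=1)     # phi_v^l
        self.w_out = nn.Conv2d(c_noise, c_noise, kernel_size=1)   # W^l

    def forward(self, f_h: torch.Tensor, f_s: torch.Tensor) -> torch.Tensor:
        b, c, h, w = f_h.shape
        q = self.phi_q(f_h).flatten(2).transpose(1, 2)            # (B, HW, C)
        k = self.phi_k(f_s).flatten(2)                            # (B, C, H'W')
        v = self.phi_v(f_s).flatten(2).transpose(1, 2)            # (B, H'W', C)
        attn = torch.softmax(q @ k / math.sqrt(c), dim=-1)        # scaled dot-product scores
        out = (attn @ v).transpose(1, 2).reshape(b, c, h, w)      # back to (B, C, H, W)
        return self.w_out(out) + f_h                              # residual connection, cf. Eq. (5)

# Toy usage at one of the stated resolutions (16x16); channel counts are assumptions.
blk = TextureDiffusionBlock(c_noise=128, c_tex=256)
f_h = torch.randn(2, 128, 16, 16)
f_s = torch.randn(2, 256, 16, 16)
print(blk(f_h, f_s).shape)  # torch.Size([2, 128, 16, 16])
```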
3.3. Disentangled Guidance based Sampling

Once the model learns the conditional distribution, inference is performed by first sampling a Gaussian noise y_T ~ N(0, I) and then sampling from p_θ(y_{t−1} | y_t, x_p, x_s), from t = T to t = 1, in an iterative manner. While the images generated with the vanilla sampling technique look photorealistic, they often do not strongly correlate with the conditional source image and target pose inputs. To amplify the effect of the conditioning signals x_s and x_p in the sampled images, we adapt classifier-free guidance [7] to our multi-conditional sampling procedure. We observe that in order to sample images that not only fulfil the style requirement but also ensure accurate alignment with the target pose input, it is important to employ disentangled guidance with respect to both style and pose. To enable disentangled guidance, we use the following equation:

$\epsilon_{cond} = \epsilon_{uncond} + w_p\, \epsilon_{pose} + w_s\, \epsilon_{style},$ (6)

where ε_uncond = ε_θ(y_t, t, ∅, ∅) is the unconditioned prediction of the model, in which we replace both conditions with the all-zeros tensor ∅. The pose-guided and style-guided predictions are respectively ε_pose = ε_θ(y_t, t, x_p, ∅) − ε_uncond and ε_style = ε_θ(y_t, t, ∅, x_s) − ε_uncond. w_p and w_s are the guidance scales corresponding to pose and style. In practice, the diffusion model learns both conditioned and unconditioned distributions during training by randomly setting the conditional variables x_p and x_s to ∅ for η% of the samples, so that ε_θ(y_t, t, ∅, ∅) approximates p(y_0) more faithfully.
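A small sketch of how the disentangled guidance of Eq. (6) could be evaluated at each reverse step, assuming an ε-prediction denoiser that accepts all-zeros tensors for dropped conditions; the surrounding DDPM update and the learned variance are omitted, and the interface is assumed rather than taken from the released code.

```python
import torch

def guided_eps(denoiser, y_t, t, x_pose, x_style, w_p: float = 2.0, w_s: float = 2.0):
    """Disentangled classifier-free guidance:
    eps_cond = eps_uncond + w_p * eps_pose + w_s * eps_style (Eq. 6)."""
    zeros_p = torch.zeros_like(x_pose)    # stands in for the 'empty' pose condition
    zeros_s = torch.zeros_like(x_style)   # stands in for the 'empty' style condition

    eps_uncond = denoiser(y_t, t, zeros_p, zeros_s)
    eps_pose = denoiser(y_t, t, x_pose, zeros_s) - eps_uncond
    eps_style = denoiser(y_t, t, zeros_p, x_style) - eps_uncond
    return eps_uncond + w_p * eps_pose + w_s * eps_style

# During training, conditions are zeroed out for a fraction (eta%) of the samples so that
# the unconditional prediction is meaningful at sampling time.
```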
4. Experiments

Datasets: We carry out experiments on the DeepFashion In-shop Clothes Retrieval Benchmark [11] and the Market-1501 [27] dataset. DeepFashion contains 52,712 high-resolution images of fashion models. Following the same data configuration as [30], we split this dataset into training and testing subsets with 101,966 and 8,570 pairs, respectively. Skeletons are extracted by OpenPose [1]. Market-1501 contains 32,668 low-resolution images. The images vary in terms of viewpoint, background, illumination, etc. For both datasets, the personal identities of the training and testing sets do not overlap.

Evaluation Metrics: We evaluate the model using three different metrics. Structure Similarity Index Measure (SSIM) [22] and Learned Perceptual Image Patch Similarity (LPIPS) [26] are used to quantify reconstruction accuracy. SSIM calculates pixel-level image similarity, while LPIPS computes the distance between the generated images and reference images in the perceptual domain. Fréchet Inception Distance (FID) [5] is used to measure the realism of the generated images. It calculates the Wasserstein-2 distance between the distributions of the generated images and the ground-truth images.

Implementation Details: Our PIDM model has been trained with T = 1000 noising steps and a linear noise schedule. During training, we adopt an exponential moving average (EMA) of the denoising network weights with 0.9999 decay. In all experiments, we use a batch size of 8. The Adam optimizer is used with the learning rate set to 2e-5. For disentangled guidance, we use η = 10. For sampling, the values of w_p and w_s are set to 2.0. For the DeepFashion dataset, we train our model using 256×176 and 512×352 images. For Market-1501, we use 128×64 images.

Table 1. Quantitative comparison of the proposed PIDM with several state-of-the-art models in terms of Fréchet Inception Distance (FID), Structure Similarity Index Measure (SSIM) and Learned Perceptual Image Patch Similarity (LPIPS). Results are shown at both 256×176 and 512×352 resolution for DeepFashion and at 128×64 resolution for Market-1501.

Dataset | Method | FID (↓) | SSIM (↑) | LPIPS (↓)
---|---|---|---|---
DeepFashion [11] (256×176) | Def-GAN [20] | 18.457 | 0.6786 | 0.2330
DeepFashion [11] (256×176) | PATN [30] | 20.751 | 0.6709 | 0.2562
DeepFashion [11] (256×176) | ADGAN [14] | 14.458 | 0.6721 | 0.2283
DeepFashion [11] (256×176) | PISE [23] | 13.610 | 0.6629 | 0.2059
DeepFashion [11] (256×176) | GFLA [19] | 10.573 | 0.7074 | 0.2341
DeepFashion [11] (256×176) | DPTN [24] | 11.387 | 0.7112 | 0.1931
DeepFashion [11] (256×176) | CASD [28] | 11.373 | 0.7248 | 0.1936
DeepFashion [11] (256×176) | NTED [18] | 8.6838 | 0.7182 | 0.1752
DeepFashion [11] (256×176) | PIDM (Ours) | 6.3671 | 0.7312 | 0.1678
DeepFashion [11] (512×352) | CocosNet2 [29] | 13.325 | 0.7236 | 0.2265
DeepFashion [11] (512×352) | NTED [18] | 7.7821 | 0.7376 | 0.1980
DeepFashion [11] (512×352) | PIDM (Ours) | 5.8365 | 0.7419 | 0.1768
Market-1501 [27] (128×64) | Def-GAN [20] | 25.364 | 0.2683 | 0.2994
Market-1501 [27] (128×64) | PTN [30] | 22.657 | 0.2821 | 0.3196
Market-1501 [27] (128×64) | GFLA [19] | 19.751 | 0.2883 | 0.2817
Market-1501 [27] (128×64) | DPTN [24] | 18.995 | 0.2854 | 0.2711
Market-1501 [27] (128×64) | PIDM (Ours) | 14.451 | 0.3054 | 0.2415

[Figure 3. Qualitative comparisons with several state-of-the-art models on the DeepFashion dataset. The inputs to the model are the target pose x_p and the source image x_s. From left to right, the results are of ADGAN [14], PISE [23], GFLA [19], DPTN [24], CASD [28], NTED [18] and ours, respectively (figure best viewed in zoom).]

4.1. Quantitative and Qualitative Comparisons

We quantitatively compare (Tab. 1) our proposed PIDM with several state-of-the-art methods, including Def-GAN [20], PATN [30], ADGAN [14], PISE [23], GFLA [19], DPTN [24], CASD [28], CocosNet2 [29] and NTED [18]. The experiments are done at both 256×176 and 512×352 resolution for DeepFashion and 128×64 resolution for the Market-1501 dataset. Tab. 1 shows that our model achieves the best FID score, indicating that our model can generate higher-quality images compared to the previous approaches. Furthermore, PIDM performs fa
|
Binder_Shortcomings_of_Top-Down_Randomization-Based_Sanity_Checks_for_Evaluations_of_Deep_CVPR_2023
|
Abstract While the evaluation of explanations is an important step towards trustworthy models, it needs to be done carefully, and the employed metrics need to be well understood. Specifically, model-randomization testing can be overinterpreted if regarded as a primary criterion for selecting or discarding explanation methods. To address shortcomings of this test, we start by observing an experimental gap in the ranking of explanation methods between randomization-based sanity checks [1] and model-output faithfulness measures (e.g., [20]). We identify limitations of model-randomization-based sanity checks for the purpose of evaluating explanations. Firstly, we show that uninformative attribution maps created with zero pixel-wise covariance easily achieve high scores in this type of check. Secondly, we show that top-down model randomization preserves the scales of forward-pass activations with high probability. That is, channels with large activations have a high probability of contributing strongly to the output, even after randomization of the network on top of them. Hence, explanations after randomization can only be expected to differ to a certain extent. This explains the observed experimental gap. In summary, these results demonstrate the inadequacy of model-randomization-based sanity checks as a criterion to rank attribution methods.
|
1. Introduction Parallel to the progressively astounding performance of machine learning techniques, especially deep learning methods, in solving even the most complex tasks, the transparency, trustworthiness, and lack of interpretability of these techniques have increasingly been called into question [11, 14, 15]. As potential solutions to these issues, a vast number of XAI methods have been developed in recent years [21] that aim to explain a model's behavior, for instance, by (locally) attributing importance scores to features of individual input samples, indicating how (much) these features influence a specific model decision [6, 22, 25, 27]. However, the scores obtained from different attribution-map methods tend to differ significantly, and the question arises how well each explains model decisions. This is generally not answered easily, as there are a number of desirable properties proposed to be fulfilled by these attributions, such as localization on relevant objects [4, 5, 30] or faithfulness to the model output [2, 8, 20], among others, with several quantitative tests having been proposed for each.

In parallel to these empirical evaluations, several works have proposed that explanations should fulfill a certain number of 'axioms' or 'unit tests' [1, 12, 16, 27], which need to hold universally for a method to be considered good or valid. We place our focus on the model-randomization-based sanity checks [1], which state that the explanation should be sensitive to a random permutation of parameters at one or more layers in the network. Specifically, the authors proposed to apply measures such as the Structural Similarity Index Measure (SSIM) [28] between attribution maps obtained from the original model and a derived model for which the top layers are randomized. The idea is to require that methods used to compute attribution maps should exhibit a large change when the neural network model (i.e., its defining/learned parameter set) is randomized from the top. The authors of [1, 23] suggested discarding attribution-map methods which perform poorly under this test, i.e., which have a high SSIM measure between attributions obtained with the original and the randomized model, under the assumption that those XAI methods are not affected by the model's learned parameters.

However, we observe a significant experimental gap between top-down randomization checks when used as an evaluation measure and occlusion-based evaluations of model faithfulness such as region perturbation [20]. Concretely, Guided Backpropagation (GB) [25] and Layer-wise Relevance Propagation (LRP) [6] exhibit low randomization scores under the first type of measure and yet clearly outperform several gradient-based methods in occlusion-based evaluations. We are interested in resolving this discrepancy.

We identify two shortcomings of top-down randomization checks when used as a measure of explanation quality. Firstly, we show that uninformative attribution maps created with zero pixel-wise covariance (e.g., attribution maps generated from random noise) easily achieve high scores in top-down randomization checks. Effectively, this makes top-down randomization checks favor attribution maps which are affected by gradient shattering noise [7].
Secondly, we argue that the randomization-based sanity checks may always reward explanations that change under randomization, even when such randomizations do not affect the output of the model (and its invariances) significantly. Such invariance to randomization may result, e.g., from the presence of skip connections in the model, but also from the fact that randomization may be insufficient to strongly alter the spatial distribution of activations in adjacent layers, something that we explain by the multiplicity and redundancy of positive activation paths between adjacent layers in ReLU networks. In setups which optimize parameters of attribution methods while measuring top-down randomization, this might lead to the selection of explainers with higher noise.

Along with our contributed theoretical insights and supporting experiments, the present note warns against an unreflected use of model-randomization-based sanity checks as a sole criterion for selecting or dismissing a particular attribution technique, and proposes several directions to enable a more precise and informative use of randomization-based sanity checks for assessing how XAI performs on practical ML models.

1.1. Related work

Evaluating Attributions. Comparing different attribution methods qualitatively is not sufficiently objective, and for that reason a vast number of quantitative tests have been proposed in the past in order to measure explanation quality, focusing on different desirable properties of attributions. Complexity tests [8, 9, 18] advocate for sparse and easily understandable explanations, while robustness tests [3, 8, 17] measure how much attributions change between similar samples or with slight perturbations of the input. Under the assumption of an available ground-truth explanation (e.g., a segmentation mask localizing the object(s) of interest), localization tests [4, 5, 30] ask for attributed values to be concentrated on this ground-truth area. Faithfulness tests [3, 8, 20] compare the effect of perturbing certain input features on the model's prediction to the values attributed to those features, so that optimally perturbing the features with the largest attribution values also affects the model prediction the most. Model randomization tests [1], which are the main focus of this work, progressively randomize the model, stating that attributions should change significantly with ongoing randomization.

Caveats of Model Randomization Tests. The authors of [1] find that a large number of attribution methods seem to be invariant to model parameters, as their explanations do not change significantly under cascading model randomization. However, various aspects of these sanity checks have recently been called into question. For instance, these tests were performed on unsigned attributions. Specifically for Integrated Gradients (IG) [27], [26] show that if the signed attributions are tested instead, this method suddenly passes cascading model randomization instead of failing. This indicates that some of the results obtained in [1] for attribution methods where the sign carries meaning may be skewed by the employed preprocessing. Furthermore, [29] argue for the distribution-dependence of model-randomization-based sanity checks. The authors demonstrate that some methods seem to fail the sanity checks of [1] due to the choice of task, rather than due to invariance to model parameters.
A similar observation is made by [13], who find that the same attribution methods can perform very differently under model randomization sanity checks when the model and task are varied. Note that the underlying assumption of [1], namely that "good" attribution methods should be sensitive to model parameters, is not called into question here. Rather, we posit that methods can fail the model randomization sanity checks for reasons other than invariance to model parameters.
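As a concrete illustration of the check discussed above, the following sketch randomizes the top layer of a classifier and compares gradient attributions before and after with SSIM (a high score means the explanation barely changed). The saliency method, layer selection, and preprocessing are placeholder choices and do not reproduce the exact protocol of [1].

```python
import copy
import torch
import torch.nn as nn
from skimage.metrics import structural_similarity as ssim

def saliency(model: nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Plain gradient attribution for the top logit (one of many possible explainers)."""
    x = x.clone().requires_grad_(True)
    logits = model(x)
    logits.max().backward()
    return x.grad[0].abs().sum(dim=0)          # aggregate channels -> (H, W) attribution map

def top_down_randomization_score(model, x, layers_to_reset):
    """SSIM between attributions of the original model and a top-randomized copy."""
    attr_orig = saliency(model, x)

    rand_model = copy.deepcopy(model)
    for name, module in rand_model.named_modules():
        if name in layers_to_reset and hasattr(module, "reset_parameters"):
            module.reset_parameters()          # re-initialize ("randomize") this layer
    attr_rand = saliency(rand_model, x)

    a, b = attr_orig.numpy(), attr_rand.numpy()
    rng = max(a.max() - a.min(), b.max() - b.min(), 1e-8)
    return ssim(a, b, data_range=rng)

# Toy usage with a small CNN; in practice this would be, e.g., a VGG/ResNet classifier.
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10))
x = torch.randn(1, 3, 32, 32)
print(top_down_randomization_score(model, x, layers_to_reset={"4"}))  # "4" = final Linear layer
```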
|
Bokhovkin_Neural_Part_Priors_Learning_To_Optimize_Part-Based_Object_Completion_in_CVPR_2023
|
Abstract 3D scene understanding has seen significant advances in recent years, but has largely focused on object understanding in 3D scenes with independent per-object predictions. We thus propose to learn Neural Part Priors (NPPs), parametric spaces of objects and their parts, that enable optimizing to fit a new input 3D scan with global scene consistency constraints. The rich structure of our NPPs enables accurate, holistic scene reconstruction across similar objects in the scene. Both objects and their part geometries are characterized by coordinate-field MLPs, facilitating optimization at test time to fit to input geometric observations as well as similar objects in the input scan. This enables more accurate reconstructions than independent per-object predictions made in a single forward pass, while establishing global consistency within a scene. Experiments on the ScanNet dataset demonstrate that NPPs significantly outperform the state of the art in part decomposition and object completion in real-world scenes.

1. Introduction With the introduction of commodity RGB-D sensors (e.g., Microsoft Kinect, Intel RealSense, etc.), remarkable progress has been made in reconstruction and tracking to construct 3D models of real-world environments [9, 14, 37, 39, 52]. This has enabled the construction of large-scale datasets of real-world 3D scanned environments [4, 11], enabling significant advances in 3D semantic segmentation [10, 13, 23], 3D semantic instance segmentation [18, 24, 26], and even part-level understanding of scenes [3]. The achieved 3D object recognition has shown impressive advances, but methods focus on independent predictions per object in single forward passes, resulting in semantic predictions that are inconsistent between repeated objects in a scene, and/or geometric predictions that do not precisely match the observed input geometry. Simultaneously, recent advances in representing 3D shapes as continuous implicit functions represented with coordinate-field MLPs have shown high-fidelity shape reconstruction [6-8, 40, 47]. Such methods have focused on object-level reconstructions, whereas part-based understanding is fundamental to many higher-level scene understanding tasks (e.g., interactions often occur with object parts: sitting on the seat of a couch, opening a door with a handle, etc.).

We thus propose to learn Neural Part Priors (NPPs), optimizable parametric shape and part spaces learned from synthetic data. These learned manifolds enable efficient traversal of the latent spaces during inference to fit precisely to objects in real-world scanned scenes, while maintaining consistent part decompositions with similar objects in the scene. Our NPPs leverage the representation power of neural implicit functions encoded as coordinate-field MLPs, representing both shape and part geometries of objects. A shape can then be represented by a set of latent codes for each of its parts, where each code decodes to predict the respective part segmentation and signed distance field representation of the part geometry. This representation enables effective test-time joint optimization over all parts of a shape by traversing the part latent space to find the set of parts that best explain a real-world shape observation.
Furthermore, as repeated objects often appear in a scene under different partial observation patterns (resulting in inconsistent predictions when made independently for each object), we further optimize for part consistency between similar objects detected in a scene to produce scene-consistent part decompositions. This allows us to reconstruct the holistic structure of a scene. To fit real-world 3D scan data, we first perform object detection and estimate the part types for each detected object. We can then optimize at test time jointly over the part codes for each shape to fit the observed scan geometry; we leverage a predicted part segmentation of the detected object and optimize jointly across the parts of each shape such that each part matches the segmentation and their union fits the object. This joint optimization across parts produces a high-resolution part decomposition whose union represents the complete shape while fitting precisely to the observed real geometry. Furthermore, this optimization at inference time allows leveraging global scene information to inform our optimized part decompositions; in particular, we consider objects of the same predicted class with similar estimated geometry and optimize them jointly, enabling more robust and scene-consistent part decompositions. In summary, we present the following contributions:
• We propose to learn optimizable part-based latent priors for 3D shapes, Neural Part Priors, encoding part segmentation and part geometry into a latent space for the part types of each class category.
• Our learned, optimizable part priors enable test-time optimization over the latent spaces, enhanced with inter-shape part-based constraints, to fit partial, cluttered object geometry in real-world scanned scenes, resulting in robust and precise semantic part completion.
• We additionally propose a scene-consistent optimization, enhanced with intra-shape constraints, jointly optimizing over similar objects, which provides globally consistent part decompositions for repeated object instances in a scene.

2. Related Works

3D Object Detection and Instance Segmentation. 3D semantic scene understanding has seen rapid progress in recent years, with large-scale 3D datasets [4, 11, 19] and developments in 3D deep learning showing significant advances in object-level understanding of 3D scenes. Various methods explore learning on different 3D representations for object detection and instance segmentation, including volumetric grids [26, 49], point clouds [30, 38, 42, 43, 55], sparse voxel representations [18, 24], and hybrid approaches [26, 42]. These approaches have achieved impressive performance in detecting and segmenting objects in real-world observations of 3D scenes, but do not consider lower-level object part information that is requisite for many vision and robotics tasks involving object interaction and manipulation. Recently, Bokhovkin et al. [3] proposed an approach to estimate part decompositions of objects in RGB-D scans, leveraging structural part type prediction and a pre-computed set of geometric part priors. Due to the use of dense volumetric part priors, the part reasoning is limited to coarse resolutions and does not precisely match the observed input geometry.
We also address the task of semantic part prediction and completion for objects in real-world 3D scans, but leverage a learned, structured latent space representing neural part priors, enabling part reasoning at high resolutions while optimizing to fit accurately to the observed scan geometry.

3D Scan Completion. As real-world 3D reconstructions are very often incomplete due to the complexity of the scene geometry and occlusions, various approaches have been developed to predict complete shape or scene geometry from partial observations. Earlier works focused on voxel-based scan completion for shapes [16, 54], with more recent works tackling the challenge of generating complete geometry from partial observations of large-scale scenes [12, 15, 17, 48], but without considering individual object instances. Several recent works propose to detect objects in an RGB-D scan and estimate the complete object geometries, leveraging voxel [3, 27] or point [38, 58] representations. Our approach to predicting part decompositions of objects inherently provides object completion as the union of the predicted parts; in contrast to previous approaches estimating object completion in RGB-D scans, we propose to characterize object parts as learned implicit priors, enabling test-time traversal of the latent space to fit accurately to the observed input scan geometry.

Part Segmentation of 3D Shapes. Part segmentation for 3D shapes has been well studied in shape analysis, typically focusing on understanding collections of synthetic shapes. Various methods have been developed for unsupervised part segmentation by finding a consistent segmentation across a set of shapes [22, 28, 29, 33, 46]. Recent deep learning based approaches have leveraged datasets of shapes with part annotations to learn part segmentation on new shapes [25, 31, 57]. In particular, approaches that learn part sequences and hierarchies to capture part structures have shown effective part segmentation for shapes [35, 36, 50, 51, 53, 56]. These approaches target single-object scenarios, whereas we construct a set of learned part priors that can be optimized to fit to real-world, noisy, incomplete scan geometry.

[Figure 2. Method overview. From an input scan, we first detect 3D bounding boxes for objects. For each object, we predict its semantic part structure as a set of part labels and latent codes for each part. These latent codes map into the space of neural part priors, along with a full shape code used to regularize the shape structure. We then refine these codes at test time by optimizing to fit to the observed input geometry along with inter-object consistency between similar detected objects, producing effective part decompositions reflecting complete objects with scene consistency.]

Neural Implicit Representations of 3D Shapes. Recently, we have seen significant advances in generative shape modeling with learned neural implicit representations that can represent continuous implicit surfaces, without ties to an explicit grid structure. Notably, DeepSDF [40] proposed an MLP-based network that predicts the SDF value for a given 3D location in space, conditioned on a latent shape code, which demonstrated effective modeling of 3D shapes while traversing the learned shape space.
Such implicit representations have also been leveraged in hybrid approaches coupling explicit geometric locations with local implicit descriptions of geometry for shapes [20, 21] as well as scenes [41], without semantic meaning to the local decompositions. We propose to leverage the representation power of such learned continuous implicit surfaces to characterize semantic object parts that can be jointly optimized together to fit all parts of an object to a partial scan observation.

3. Method

3.1. Overview

We introduce Neural Part Priors (NPPs) to represent learned spaces of geometric object part priors that enable joint part segmentation and completion of objects in real-world, incomplete RGB-D scans. From an input 3D scan S, we first detect objects O = {o_i} in the scan, characterized by their bounding boxes and orientations; then, for each object, we predict its part decomposition into part class categories (with corresponding part latent codes) and their corresponding complete geometry represented as signed distance fields (SDFs), trained on part annotations for shapes. This enables holistic reasoning about each object in the scene and prediction of complete geometry in unobserved scan regions. Since captured real-world scene geometry contains significant incompleteness and noise, we model our geometric part priors based on complete, clean
|
synthetic object parts, represented as a learned latent space over implicit part geometry functions. This enables test-time optimization over the latent space of parts to fit real geometry observations, enabling part-based object completion while precisely representing the real object geometry. Rather than considering each object independently, we observe that repeated objects often occur in scenes under different partial observations, leading to inconsistent independent predictions; we thus jointly optimize across similar objects in a scene, where objects are considered similar if they share the same class category and their predicted shape geometries are close in chamfer distance. This results in scene-consistent, high-fidelity characterizations of both object part semantics and complete object geometry. An overview of our approach is shown in Fig. 2.

3.2. Object Detection

From the input 3D scan S, we first detect objects in the scene, leveraging a state-of-the-art 3D object detection backbone from MLCVNet [55]. MLCVNet interprets S as a point cloud and proposes objects through voting [43] at multiple resolutions, providing an output set of axis-aligned bounding boxes for each detected object o_i. We extract the truncated signed distance field D_i for each o_i at 4mm resolution to use for test-time optimization. We then aim to characterize shape properties for o_i to be used for rotation estimation and test-time optimization, and interpret D_i as a 32^3 occupancy grid which is input to a 3D convolutional object encoder to produce the object's shape descriptor s_i ∈ R^256.

Initial Rotation Estimation. From s_i, we use a 2-layer MLP to additionally predict an initial rotation estimate of the object, r_i^init, around the up (gravity) vector of S. We note that the up vector of an RGB-D scan can be reliably estimated with IMU and/or floor estimation techniques [11]. The rotation estimation is treated as a classification problem across n_r = 12 bins of discretized angles ({0°, 30°, ..., 330°}), using a cross-entropy loss. We use the estimated rotation r_i^init to resample D_i to approximate the canonical object orientation, from which we optimize for the final rotation r_i and the object part latent codes.

3.3. Learned Space of Neural Part Priors

We first learn a set of latent part spaces, one per class category, where each part space represents all part types for the particular object category. To this end, we employ a function f_p, characterized as an MLP, to predict the implicit signed distance representation for each part geometry of the class category. In addition to the latent part space, we additionally train a proxy shape function f_s as an MLP that learns full shape geometry as implicit signed distances, which serves as additional regularization during the part optimization. Both f_p and f_s are trained in auto-decoder fashion following DeepSDF [40]. Each training shape part is then embedded into a part latent space by optimizing for its code z_k^p ∈ R^256 such that f_p, conditioned on this code and the part type, maps a positional encoding x_pos ∈ R^63 of a point x ∈ R^3 in the canonical space to the SDF value d of the part geometry:

$f_p: \mathbb{R}^{63} \times \mathbb{R}^{256} \times \mathbb{Z}_2^{N_c} \to \mathbb{R}, \qquad f_p(x_{pos}, z^p_k, \mathbb{1}_{part}) = d,$ (1)

where 1_part ∈ Z_2^{N_c} is a one-hot encoding of the part type for a maximum of N_c parts. Similar to NeRF [34], the Euclidean coordinates x ∈ R^3 are encoded using sin/cos functions with 10 frequencies [2^0, ..., 2^9]. The shape space is trained analogously for each class category, where z_i^s ∈ R^256 represents a shape latent code in the space:

$f_s: \mathbb{R}^{63} \times \mathbb{R}^{256} \to \mathbb{R}, \qquad f_s(x_{pos}, z^s_i) = d.$ (2)
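A minimal sketch of the part decoder f_p from Eq. (1), with the NeRF-style positional encoding described above (3 + 3·2·10 = 63 input dimensions); the hidden width, depth, and usage below are assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

def positional_encoding(x: torch.Tensor, n_freqs: int = 10) -> torch.Tensor:
    """(N, 3) points -> (N, 3 + 3*2*n_freqs) = (N, 63) encoding with frequencies 2^0..2^9."""
    feats = [x]
    for i in range(n_freqs):
        feats.append(torch.sin((2.0 ** i) * x))
        feats.append(torch.cos((2.0 ** i) * x))
    return torch.cat(feats, dim=-1)

class PartSDFDecoder(nn.Module):
    """f_p(x_pos, z_k^p, 1_part) -> SDF value d (Eq. 1); hidden width/depth are guesses."""

    def __init__(self, code_dim: int = 256, n_parts: int = 10, hidden: int = 512):
        super().__init__()
        in_dim = 63 + code_dim + n_parts
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x: torch.Tensor, z_part: torch.Tensor, part_onehot: torch.Tensor):
        x_pos = positional_encoding(x)                        # (N, 63)
        h = torch.cat([x_pos, z_part, part_onehot], dim=-1)   # condition on code + part type
        return self.net(h).squeeze(-1)                        # predicted SDF values d

# Toy usage: 1024 query points for one part of one shape (part type index 2 is hypothetical).
decoder = PartSDFDecoder()
x = torch.rand(1024, 3) * 2 - 1                       # points in the canonical space
z = torch.randn(256).expand(1024, 256)                # one part latent code, broadcast to all points
onehot = torch.zeros(1024, 10)
onehot[:, 2] = 1.0
print(decoder(x, z, onehot).shape)                    # torch.Size([1024])
```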
We train the latent spaces of part and shape priors on the synthetic PartNet [36] dataset to characterize a space of complete parts and shapes, by minimizing the reconstruction error over all training shape parts while optimizing for the latent codes {z_k^p} and the weights of f_p. We use an ℓ1 reconstruction loss with ℓ2 regularization on the latent codes:

$\mathcal{L} = \sum_{j=1}^{N_p} \big| f_p(x^{pos}_j, z^p_k, \mathbb{1}_{part}) - D_{gt}(x^{pos}_j) \big|_1 + \lambda \lVert z^p_k \rVert_2^2$ (3)

for N_p points near the surface, with regularization weight λ = 1e-5. We train f_s analogously.

3.4. Part Decompositions in Real Scenes

Once we have learned our latent space of parts, we can traverse it at test time to find the part-based decomposition of an object that best fits its real-world observed geometry in a scene. Since real-world observations are typically incomplete, we can optimize for complete part decompositions based on the strong priors given by the trained latent spaces. This allows for effective regularization by synthetic part characteristics (clean, complete) while fitting precisely to the real observed geometry. To guide this optimization for a detected object box o with its shape feature s, we predict its high-level decomposition into a set of semantic part types {(c_k, p_k)}, where p_k ∈ R^256 is a part feature descriptor and c_k the part class label. The object index i is dropped here for simplicity. The part optimization is initialized using {(c_k, p_k)}. To obtain the semantic part type predictions, we employ a message-passing graph neural network that infers part relations to predict the set of component part types. Similar to [35], from the shape feature s we use an MLP to predict at most N_c = 10 parts. For each potential part k, we predict its probability of existence, its part label c_k, its feature vector p_k, and the probabilities of physical adjacency (given by face connectivity between parts) between each pair of parts, in order to learn structural part information. This produces the semantic description of the set of parts for the object, {(c_k, p_k)}, taken from the parts predicted with existence probability > 0.5.

Projection to the Latent Part Space. We then learn a projection mapping from the part features {p_k} to the learned latent part space based on synthetic part priors, using a small MLP to predict {z̃_k^p}, as shown in Fig. 3(a). This helps to provide a close initial estimate in the latent part space in order to initialize the optimization over these part codes to fit precisely to the observed object geometry. Similarly, we project the shape code s to the learned latent shape space with a small shape projection MLP to predict z̃^s, which we use to help regularize the part code optimization. Both of these projection MLPs are trained using MSE losses against the optimized training codes of the latent spaces.

Part Segmentation Estimation. In addition to our projection initialization, we estimate a part segmentation {D_p} for p = 1, ..., N_parts of the input object TSDF D over the full volume, representing part SDF geometry in the regions predicted as corresponding to part p, where the part segmentation regions cover the entire shape, including unobserved regions. This is used to guide the part geometry predictions when optimizing at test time to fit to the real observed input geometry. For each point x ∈ R^3 with distance < d_trunc = 0.16m from the input object TSDF D, we classify it to one of the predicted parts {(c_k, p_k)} or to background using a small PointNet-based [44] network. This segmentation prediction takes as input the corresponding shape feature s, the initial estimated rotation r_i^init, and the 3D coordinates of x, and is trained with a cross-entropy loss.
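The priors above are trained in auto-decoder fashion (Eq. (3)): one latent code per training part is optimized jointly with the decoder weights. Below is a condensed sketch under assumed data structures, with batching, point sampling, and hyperparameters simplified (the loss averages over points, whereas Eq. (3) sums).

```python
import torch

# decoder: a PartSDFDecoder as sketched above (assumed).
# dataset: list of (points, gt_sdf, part_onehot) tuples, one entry per training part (assumed).
def train_autodecoder(decoder, dataset, epochs: int = 100, lam: float = 1e-5, lr: float = 1e-4):
    # One learnable latent code per training part, optimized jointly with the decoder (DeepSDF-style).
    codes = torch.nn.Parameter(0.01 * torch.randn(len(dataset), 256))
    opt = torch.optim.Adam([{"params": decoder.parameters()}, {"params": [codes]}], lr=lr)
    for _ in range(epochs):
        for k, (pts, sdf_gt, onehot) in enumerate(dataset):
            n = pts.shape[0]
            z = codes[k].expand(n, 256)
            pred = decoder(pts, z, onehot.expand(n, -1))
            # l1 reconstruction + l2 code regularization, cf. Eq. (3)
            loss = (pred - sdf_gt).abs().mean() + lam * codes[k].pow(2).sum()
            opt.zero_grad()
            loss.backward()
            opt.step()
    return codes.detach()
```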
[Figure 3. (a) Projection into the part and shape latent spaces along with part segmentation from the input scan geometry. (b) Optimization at test time to fit to the observed scan geometry while maintaining inter-part consistency within a shape and inter-shape consistency for geometrically similar objects.]

3.5. Joint Inter- and Intra-Shape Part Optimization

To obtain the final part decompositions, we traverse over the learned latent part space to fit to the observed input scan geometry, as shown in Fig. 3(b). From the initial estimated part codes {z̃_k^p} and shape code z̃^s, their decoded part SDFs should match each of the {D_p}, p = 1, ..., N_parts. Since the part and shape latent spaces have been trained in the canonical shape space, we optimize for a refined rotation prediction r from r^init using iterative closest points [2, 45] between the sampled points near D and the initial shape estimate decoded from the projection z̃^s. We use N_i sampled points near the observed input surface D (near meaning SDF values < 0.025m) for rotation refinement, with N the number of points not predicted as background during part segmentation.

While the predicted projected part and shape codes {z̃_k^p}, z̃^s can provide a good initial estimate of the part decomposition of the complete shape, they represent synthetic part and shape priors that often do not fit the observed real input geometry. We thus optimize for part decompositions that best fit the input observations by minimizing the energy:

$\mathcal{L} = \sum_k \mathcal{L}_{part} + \mathcal{L}_{shape} + w_{cons}\, \mathcal{L}_{cons},$ (4)

where L_part denotes the part reconstruction loss, L_shape a proxy shape reconstruction loss, L_cons a regularization that encourages global part consistency within the estimated shape, and w_cons is a consistency weight. L_part is an ℓ1 loss on part reconstruction:

$\mathcal{L}_{part} = \sum_{p=1}^{N_{parts}} \sum_{N_p} w_{trunc}\, \big| f_p(z^p_k) - T_r(D_p) \big| + \lambda \lVert z^p_k \rVert_2^2,$ (5)

where N_p is the number of points classified to part p and w_trunc gives a fixed greater weight to near-surface points (< d_trunc = 0.16m). L_shape is a proxy ℓ1 loss on shape reconstruction:

$\mathcal{L}_{shape} = \sum_{N} w_{trunc}\, \big| f_s(z^s) - T_r(D) \big| + \lambda \lVert z^s \rVert_2^2.$ (6)

The regularization weight λ = 1e-5 for both Eq. 5 and Eq. 6. Finally, L_cons encourages all parts to reconstruct a shape similar to the optimized shape:

$\mathcal{L}_{cons} = \sum_{N} \big| f_p(z^p_k) - f_s(z^s) \big|,$ (7)

where f_s(z^s) is frozen for L_cons. This allows the reconstructed parts to join together smoothly without boundary artifacts to holistically reconstruct the shape. This produces a final optimized set of parts for each object in the scene, where the parts both fit precisely to the input geometry and represent the complete geometry of each part, even in unobserved regions. The final part geometries can be extracted from the SDFs with Marching Cubes [32] to obtain a surface mesh representation of the semantic part completion.
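A condensed sketch of the test-time fitting of Eqs. (4)-(7), assuming the part/shape decoders from the previous sketches and precomputed target SDF samples at shared query points; rotation refinement, truncation weighting, and per-part point sets are simplified, so this is only an outline of the optimization loop, not the paper's implementation.

```python
import torch

def fit_parts(decoder_p, decoder_s, z_parts, z_shape, pts, part_targets, shape_target,
              part_onehots, steps: int = 300, w_cons: float = 0.1, lam: float = 1e-5):
    """Jointly refine part codes (and the proxy shape code) to fit observed SDF samples."""
    z_parts = [z.clone().requires_grad_(True) for z in z_parts]   # initialized from projections
    z_shape = z_shape.clone().requires_grad_(True)
    opt = torch.optim.Adam(z_parts + [z_shape], lr=5e-3)
    n = pts.shape[0]
    for _ in range(steps):
        d_shape = decoder_s(pts, z_shape.expand(n, -1))
        # proxy shape term, cf. Eq. (6)
        loss = (d_shape - shape_target).abs().mean() + lam * z_shape.pow(2).sum()
        for z, onehot, target in zip(z_parts, part_onehots, part_targets):
            d_part = decoder_p(pts, z.expand(n, -1), onehot.expand(n, -1))
            # part reconstruction term, cf. Eq. (5)
            loss = loss + (d_part - target).abs().mean() + lam * z.pow(2).sum()
            # consistency term with the shape prediction frozen, cf. Eq. (7)
            loss = loss + w_cons * (d_part - d_shape.detach()).abs().mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return [z.detach() for z in z_parts]
```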
Scene-Consistent Optimization. Scenes often contain repeated instances of objects which are observed from different views, thus frequently resulting in inconsistent part decompositions when considered as independent objects. We propose a scene-consistent optimization between similar predicted objects, where objects in a scene are considered similar if their predicted class category is the same and the chamfer distance between their decoded shapes from z̃_k^s is < τ_s. For a set of N_sim similar objects in a scene, we collect together their predicted part segmentations and observed input SDF geometry in the canonical orientation based on T_r(D_p) to provide a holistic set of constraints across different partial observations, producing {D_i}_{i=1}^{N_sim}. The {D_i}_{i=1}^{N_sim} are then aggregated to form D′ by sampling a set of N_avg points near the surfaces of {D_i}_{i=1}^{N_sim}, where N_avg is the average number of points across the N_sim objects, and each point is assigned the minimum SDF value within its 30-point local neighbourhood.

Chamfer Distance – Accuracy (↓):
Method | chair | table | cab. | bkshlf | bed | bin | class avg | inst avg
SG-NN [12] + MLCVNet [55] + PointGroup [30] | 0.047 | 0.110 | 0.146 | 0.173 | 0.350 | 0.051 | 0.146 | 0.083
MLCVNet [55] + StructureNet [35] | 0.024 | 0.074 | 0.104 | 0.166 | 0.424 | 0.039 | 0.138 | 0.061
Bokhovkin et al. [3] | 0.029 | 0.073 | …

Chamfer Distance – Completion (↓):
Method | chair | table | cab. | bkshlf | bed | bin | class avg | inst avg
SG-NN [12] + MLCVNet [55] + PointGroup [30] | 0.054 | 0.141 | 0.123 | 0.192 | 0.382 | 0.045 | 0.156 | 0.089
MLCVNet [55] + StructureNet [35] | 0.028 | 0.129 | 0.118 | 0.154 | 0.352 | 0.037 | 0.136 | 0.067
Bokhovkin et al. [3] | …
|
Huang_Adaptive_Assignment_for_Geometry_Aware_Local_Feature_Matching_CVPR_2023
|
Abstract The detector-free feature matching approaches are cur-rently attracting great attention thanks to their excellent performance. However, these methods still struggle at large-scale and viewpoint variations, due to the geomet-ric inconsistency resulting from the application of the mutual nearest neighbour criterion ( i.e., one-to-one as-signment) in patch-level matching. Accordingly, we in-troduce AdaMatcher, which first accomplishes the fea-ture correlation and co-visible area estimation through an elaborate feature interaction module, then performs adaptive assignment on patch-level matching while es-timating the scales between images, and finally refines the co-visible matches through scale alignment and sub-pixel regression module. Extensive experiments show that AdaMatcher outperforms solid baselines and achieves state-of-the-art results on many downstream tasks. Ad-ditionally, the adaptive assignment and sub-pixel refine-ment module can be used as a refinement network for other matching methods, such as SuperGlue, to boost their per-formance further. The code will be publicly available at https://github.com/AbyssGaze/AdaMatcher.
|
1. Introduction Establishing accurate correspondences for local features between image pairs is an essential basis for a broad range of computer vision tasks, including visual localization, structure from motion (SfM), simultaneous localization and mapping (SLAM), etc. However, achieving reliable and ac-curate feature matching is still challenging due to various factors such as scale changes, viewpoint diversification, il-lumination variations, repetitive patterns, and poor texture. Existing image matching pipelines are mainly divided *These authors contributed equally. †Corresponding author. Adaptive Assignment One-to-one Assignment𝑝𝐴𝑝𝐵 𝑝1𝐶 𝑝2𝐶 Patch centre Projected point𝑰𝑩 𝑰𝑪 𝑰𝑨 𝑰𝑪 𝑝𝐴𝑝𝐵 𝑝1𝐶 𝑝2𝐶Figure 1. An illustration of one-to-one assignment and adap-tive assignment. Under viewpoint changes or scale variations, one-to-one assignment leads to geometric inconsistency in patch-level feature matching, while adaptive assignment does not. For example, with one-to-one assignment, patch pair (pA, pC 2)is treated as a negative example, even though both pC 1andpC 2are projected into pAofIA. Such a matching rule is inconsistent with two-view and multi-view projective geometry. into two types: detector-based and detector-free. The for-mer is to build matches on detected and described sparse keypoints [8, 19, 20, 23, 26, 32]. However, as the detector-based matching pipeline relies on the reliability of key-point detectors and features description, it tends to per-form poorly under large viewpoint changes or scale vari-ations. For the latter, the detector-free matching pipeline can take full advantage of the rich context to establish corre-This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 5425 spondence between images end-to-end [6,13,24,25,29–31], without independent keypoint detection and feature descrip-tion steps. To achieve efficiency and accurate matching, the SOTA detector-free matching pipelines [6,11,13,29,30] use a coarse-to-fine structure, in which the patch-level matches are first obtained using the mutual nearest neighbor crite-rion, and then are refined to a sub-pixel level. Although these methods have improved considerably in performance, they still perform unsatisfactorily in extreme cases ( e.g., large viewpoint changes and scale changes). This is due to the fact that applying the mutual nearest neighbor criterion (ie, one-to-one correspondence) in patch-level matching leads to geometric inconsistencies and dif-ficulties in obtaining sufficient high-quality matches under large-scale or viewpoint variations. As shown in Fig.1, where IA, IB, ICare from the same scene, pC 1andpC 2of ICare both projected into pAofIA. However, when the mutual nearest neighbour criterion is applied in the train-ing process, the patch pair (pA, pC 1)is treated as a positive sample, while the patch pair (pA, pC 2)is treated as a neg-ative sample. The incorrect assignment leads to two-view geometric inconsistency. Deeply, from a multi-view per-spective, (pA, pB)and(pB, pC 2)are positive samples while (pA, pC 2)is a negative sample, which leads to multi-view geometric inconsistency between multiple image pairs. For inference, when there are large viewpoint changes or scale variations, one-to-one matching is difficult to obtain enough inliers to ensure accurate camera pose estimation. 
Further-more, when applied to multi-view-based downstream tasks (e.g., SfM and 3D reconstruction), one-to-one patch-level correspondences do not guarantee the consistency of multi-view matching, which is likely to make the mapping fail or the bundle adjustment difficult to converge. Inspired by the above consideration, we present AdaMatcher, a geometry aware local feature matching ap-proach, targeting at mitigating potential geometry mismatch between image pairs without scale-alignment preprocess-ing or viewpoint warping. Different from dual-softmax or optimal transport in [28, 29] which guarantees one-to-one correspondence, we allow adaptive assignment (including many-to-one and one-to-one) at patch-level matching dur-ing training and inference. When the scale or viewpoint changes significantly, the adaptive assignment can guaran-tee matching accuracy. The smooth scale transition from many-to-one matches between image pairs can be adopted to resolve scale inconsistencies. Furthermore, the structure of our delicately designed feature interaction module cou-ples co-visible feature decoding with cross-feature interac-tion, allowing the probability map of the co-visible region to be obtained later by a simple module to filter out matches outside co-visible areas. To summarize, we aim to provide several critical insights of matching local features across scales and viewpoints:• We propose a detector-free matching approach AdaMatcher that allows a patch-level adaptive assign-ment followed by a sub-pixel refinement to guaran-tee the establishment of geometry aware feature cor-respondences. • We introduce a novel feature interaction structure, which couples the co-visible feature decoding and cross-feature interaction. The probability map of the co-visible area can be obtained later by an additional module. • Extensive experiments and analysis demonstrate that AdaMatcher outperforms various strong baselines and achieves SOTA results for many downstream vision tasks.
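As a toy illustration of the difference between the mutual-nearest-neighbour criterion and an adaptive (many-to-one) assignment on a patch-level correlation matrix, consider the sketch below. The score matrix, threshold, and selection rule are simplified stand-ins, and the actual AdaMatcher assignment additionally involves co-visible area estimation and scale alignment that are not shown.

```python
# Toy sketch (not the AdaMatcher implementation): one-to-one vs. adaptive
# (many-to-one allowed) patch-level assignment from a correlation matrix.
import torch

def one_to_one_matches(scores, thr=0.2):
    """Mutual-nearest-neighbour criterion: (i, j) kept only if i and j
    are each other's best match."""
    row_best = scores.argmax(dim=1)          # best j for each i
    col_best = scores.argmax(dim=0)          # best i for each j
    i = torch.arange(scores.shape[0])
    mutual = col_best[row_best] == i
    keep = mutual & (scores[i, row_best] > thr)
    return torch.stack([i[keep], row_best[keep]], dim=1)

def adaptive_matches(scores, thr=0.2):
    """Adaptive assignment: keep every (i, argmax_j) and (argmax_i, j) above
    threshold, so several patches of one image may map to the same patch of
    the other (many-to-one) under scale or viewpoint change."""
    i = torch.arange(scores.shape[0])
    j = torch.arange(scores.shape[1])
    a2b = torch.stack([i, scores.argmax(dim=1)], dim=1)
    b2a = torch.stack([scores.argmax(dim=0), j], dim=1)
    cand = torch.cat([a2b, b2a], dim=0)
    keep = scores[cand[:, 0], cand[:, 1]] > thr
    return torch.unique(cand[keep], dim=0)

scores = torch.softmax(torch.randn(6, 6), dim=1)     # fake patch correlations
print(one_to_one_matches(scores).shape, adaptive_matches(scores).shape)
```

Under large scale change, the adaptive rule keeps the geometrically consistent pairs (such as both p1C and p2C mapping to pA in Fig. 1) that the mutual-nearest-neighbour rule discards.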
|
Chen_Implicit_Neural_Head_Synthesis_via_Controllable_Local_Deformation_Fields_CVPR_2023
|
Abstract High-quality reconstruction of controllable 3D head avatars from 2D videos is highly desirable for virtual human applications in movies, games, and telepresence. Neural implicit fields provide a powerful representation to model 3D head avatars with personalized shape, expressions, and facial parts, e.g., hair and mouth interior, that go beyond the linear 3D morphable model (3DMM). However, existing methods do not model faces with fine-scale facial features, or local control of facial parts that extrapolate asymmetric expressions from monocular videos. Further, most condition only on 3DMM parameters with poor(er) locality, and re-solve local features with a global neural field. We build on part-based implicit shape models that decompose a global deformation field into local ones. Our novel formulation models multiple implicit deformation fields with local seman-tic rig-like control via 3DMM-based parameters, and repre-sentative facial landmarks. Further, we propose a local con-trol loss and attention mask mechanism that promote sparsity of each learned deformation field. Our formulation renders sharper locally controllable nonlinear deformations than previous implicit monocular approaches, especially mouth interior, asymmetric expressions, and facial details. Project page: https://imaging.cs.cmu.edu/local deformation fields/
|
1. Introduction Monocular human head avatar reconstruction is a long standing challenge that has drawn a lot of attention in the last few decades due to its wide application in movie mak-ing [11, 17, 43], and virtual reality [2, 3, 29], among others. Traditional reconstruction methods in production pipelines create animatable and detailed avatars, often represented as 3D rigs, from high-quality face scans with predefined ex-pressions and poses [56, 58]. However, such data is often expensive to acquire and process, and over the years has created the need for an easier capture pipeline, e.g., based on high-definition images, or videos of human subjects. With the advancements in deep learning, much effort has gone *Work was done while interning at Flawless AI.into learning neural 3D face representations from 2D im-ages and the research community has achieved impressive results [22, 27, 39, 45]. However, modeling 3D structures from 2D information alone is an ill-posed problem, which results in models that lack view consistency and details. Both traditional and neural reconstruction pipelines based on the parametric mesh representation, 3DMM [9], are ef-ficient, controllable, and well integrated into the graphics pipeline, though at the expense of lacking important facial features such as hair, eyes, and mouth interior. In the last couple of years, there has been a surge of research on gen-eralized implicit face representations, e.g., sign distance functions (SDFs) [34], neural radiance fields (NeRFs) [16] or hybrid volumetric representations [4], that allow accu-rate modeling of fine-grained facial, and non-facial features not possible with mesh-based representations alone, while preserving view-consistency. Recently, several implicit models for human head avatars from monocular videos have demonstrated great progress [1, 3, 12, 15, 18, 35, 36, 41, 47, 57, 67, 68]. Several employ facial parameters from an existing face tracker to condition a multi-layer perceptron (MLP) to model expression changes [3,12,18,47], use deformation fields to learn a mapping from an observed point deformed by expression to a point on the template face [1,35,67], or learn forward mapping functions that estimate implicit 3DMM functions, represented as facial deformation bases [15, 42, 68]. These approaches have done a good job in allowing control of expressions and poses, even for out-of-distribution training parameters. However, none of these approaches reconstruct animatable heads with high-fidelity details such as deep creases. Besides, since they heavily rely on creating implicit fields derived from linear 3DMMs, which are de-facto limited by global or large-scale expression decompositions, it is relatively difficult to control localized deformations at a finer level, e.g., wrinkles that form around eyes when winking. In this paper, we propose a novel approach that allows implicit 3D rig modeling, and local control of detailed facial deformations. We use expression parameters obtained from a 3DMM face tracker, e.g., DECA [10], but learn to model more local deformations beyond linear 3DMM. To this end, instead of learning a single global deformation field, we This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 416 Ground-Truth Reconstruction Depth Field Sum of Local Deformation Fields Selected Local Deformation Fields Figure 1. 
Main results on test samples. Our method models dynamic 3D deformations as an ensemble of local deformation fields, centered around 3D facial landmarks, shown as red, blue, and green dots in this example (details in Sec. 3). Our formulation synthesizes 3D neural heads from 2D videos with fine-grained geometric details, as shown in column 3 (depth field). break it into multiple local fields with varying spatial support, triggered by sparse facial landmarks, with weak supervision on the 3DMM expressions. The local deformation fields are represented by nonlinear function within a certain radius, and conditioned by tracked expression parameters weighted by an attention mask that filters redundant parameters that do not influence the landmarks. Finally, the local deformations are summed with distance-based weights, which are used to deform the global point to the canonical space, and retrieve radiance and density for volumetric rendering. While part-based field decomposition [46,63] approaches have been proposed, we demonstrate that decomposing im-plicit deformation fields into local fields improves repre-sentation capacity to model facial details. By filtering re-dundant input expression parameters to each local field and providing weak supervision via 3DMM, we achieve better, detailed local control, and modelling of asymmetric expres-sions (see Fig. 1). We provide qualitative and quantitative comparisons with state-of-the-art monocular head avatar synthesis methods and show that our approach reconstructs facial details more accurately, while improving local control of the face. In summary, our contributions are as follows: 1.A novel formulation that models local field deformation for implicit NeRF face rigs that provides fine-scale control with landmarks.
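The following is a simplified sketch of the core blending idea: the deformation of a query point is a distance-weighted sum of landmark-centred local fields, each conditioned on expression parameters filtered by an attention mask. The MLP architecture, support radius, and the binary stand-in for the attention mask are assumptions, not the paper's exact design.

```python
# Simplified sketch (placeholder networks): deformation at a query point as a
# distance-weighted sum of local fields centred at 3D facial landmarks.
import torch
import torch.nn as nn

class LocalDeformationField(nn.Module):
    """One local field: (point, masked expression params) -> 3D offset."""
    def __init__(self, expr_dim):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3 + expr_dim, 128), nn.ReLU(),
                                 nn.Linear(128, 3))

    def forward(self, x, expr):
        return self.mlp(torch.cat([x, expr.expand(x.shape[0], -1)], dim=-1))

def blended_deformation(x, landmarks, fields, expr, expr_masks, radius=0.15):
    """x: (N,3) query points; landmarks: (K,3); fields: K local MLPs;
    expr: (E,) tracked expression params; expr_masks: (K,E) attention masks."""
    d = torch.cdist(x, landmarks)                       # (N, K) distances
    w = torch.relu(1.0 - d / radius)                    # compact support weights
    w = w / (w.sum(dim=1, keepdim=True) + 1e-8)         # normalise over fields
    offsets = torch.stack(
        [f(x - landmarks[k], expr * expr_masks[k])      # local coordinates
         for k, f in enumerate(fields)], dim=1)         # (N, K, 3)
    return (w.unsqueeze(-1) * offsets).sum(dim=1)       # (N, 3) deformation

K, E = 5, 50
fields = [LocalDeformationField(E) for _ in range(K)]
x = torch.rand(1024, 3); landmarks = torch.rand(K, 3)
expr = torch.randn(E); masks = (torch.rand(K, E) > 0.5).float()
print(blended_deformation(x, landmarks, fields, expr, masks).shape)  # (1024, 3)
```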
|
Hai_Shape-Constraint_Recurrent_Flow_for_6D_Object_Pose_Estimation_CVPR_2023
|
Abstract Most recent 6D object pose methods use 2D optical flow to refine their results. However, the general optical flow methods typically do not consider the target’s 3D shape in-formation during matching, making them less effective in 6D object pose estimation. In this work, we propose a shape-constraint recurrent matching framework for 6D ob-ject pose estimation. We first compute a pose-induced flow based on the displacement of 2D reprojection between the initial pose and the currently estimated pose, which em-beds the target’s 3D shape implicitly. Then we use this pose-induced flow to construct the correlation map for the following matching iterations, which reduces the matching space significantly and is much easier to learn. Further-more, we use networks to learn the object pose based on the current estimated flow, which facilitates the computation of the pose-induced flow for the next iteration and yields an end-to-end system for object pose. Finally, we optimize the optical flow and object pose simultaneously in a recurrent manner. We evaluate our method on three challenging 6D object pose datasets and show that it outperforms the state of the art significantly in both accuracy and efficiency.
|
1. Introduction 6D object pose estimation, i.e., estimating the 3D rota-tion and 3D translation of a target object with respect to the camera, is a fundamental problem in 3D computer vision and also a crucial component in many applications, includ-ing robotic manipulation [8] and augmented reality [34]. Most recent methods rely on pose refinement to obtain ac-curate pose results [16, 31, 52]. Typically, they first syn-thesize an image based on the rendering techniques [9, 38] according to the initial pose, then estimate dense 2D-to-2D correspondence between the rendered image and the input based on optical flow networks [46]. After lifting the esti-mated 2D optical flow to 3D-to-2D correspondence based on the target’s 3D shape, they can obtain a new refined pose using Perspective-n-Points (PnP) solvers [27]. Although this paradigm works well in general, it suffers from several weaknesses. First, the general optical flow Reference Dense FlowShape-unawareTarget 3D Shape Reference Dense FlowShape-unawareTarget 3D Shape Reference Dense FlowShape-unawareTarget 3D Shape Reference Dense FlowShape-unawareTarget 3D Shape Input Initialization Flow Flow warp Figure 1. The problem of optical flow in 6D pose estimation. Given an initial pose, one can estimate the dense 2D-to-2D cor-respondence (optical flow) between the input and the synthetic image rendered from the initial pose, and then lift the dense 2D matching to 3D-to-2D correspondence to obtain a new refined pose by PnP solvers (PFA-Pose [16]). However, the flow estimation does not take the target’s 3D shape into account, as illustrated by the warped image based on the estimated flow in the last figure, which introduces significant matching noise to pose solvers and is suboptimal for 6D object pose estimation. networks they use are mainly built on top of two assump-tions, i.e., the brightness consistency between two poten-tial matches and the smoothness of matches within a local neighbor [1]. These assumptions, however, are too general and do not have any clue about the target’s 3D shape in the context of 6D object pose estimation, making the potential matching space of every pixel unnecessarily large in the tar-get image. Second, the missing shape information during matching often results in flow results that do not respect the target shape, which introduces significant matching noise, as shown in Fig. 1. Third, this multi-stage paradigm trains a network that relies on a surrogate matching loss that does not directly reflect the final 6D pose estimation task [17], which is not end-to-end trainable and suboptimal. To address these problems, we propose a shape-constraint recurrent matching framework for 6D object pose estimation. It is built on top of the intuition that, in addition to the brightness consistency and smoothness constraint in classical optical flow solutions [2,35], the dense 2D match-ing should comply with the 3D shape of the target. We first build a 4D correlation volume between every pixel of the source image and every pixel of the target image, similar to RAFT [46]. While, instead of indexing from the correla-tion volume according to the current flow during the itera-tion, we propose indexing the correlation volume based on This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 
a pose-induced flow, which is forced to contain only the 2D reprojections of the target's 3D shape and reduces the matching space of the correlation map construction significantly. Furthermore, we propose to use networks to learn the object pose based on the current flow prediction, which facilitates the computation of the pose-induced flow for the next iteration and also removes the necessity of explicit PnP solvers, making our system end-to-end trainable and more efficient, as shown in Fig. 2(b). We evaluate our method on the challenging 6D object pose benchmarks, including LINEMOD [14], LINEMOD-Occluded [25], and YCB-V [50], and show that our method outperforms the state of the art significantly, and converges much more quickly.
Figure 2. Different pose refinement paradigms. (a) The standard strategy: most pose refinement methods [16] rely on a recurrent architecture to estimate dense 2D flow between the rendered image I1 and the real input image I2, based on a dynamically constructed correlation map according to the flow results of the previous iteration. After the convergence of the flow network and lifting the 2D flow to a 3D-to-2D correspondence field, they use PnP solvers to compute a new refined pose P̂. This strategy, however, has a large matching space for every pixel in constructing correlation maps, and optimizes a surrogate matching loss that does not directly reflect the final 6D pose estimation task. (b) Our strategy: by contrast, we propose optimizing the pose and flow simultaneously in an end-to-end recurrent framework with the guidance of the target's 3D shape. We impose a shape constraint on the correlation map construction by forcing the construction to comply with the target's 3D shape, which reduces the matching space significantly. Furthermore, we propose learning the object pose based on the current flow prediction, which, in turn, helps the flow prediction and yields an end-to-end system for object pose.
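A small sketch of the pose-induced flow idea follows: the flow used to index the correlation volume is the displacement of the 2D reprojections of the object's 3D model points between the initial (rendered) pose and the current pose estimate. The intrinsics, poses, and sparse model points below are hypothetical example values, and dense rasterisation and the correlation-volume lookup are omitted.

```python
# Sketch (sparse points only, no rasterisation): pose-induced flow as the 2D
# reprojection displacement of the object's 3D points between two poses.
import numpy as np

def project(points_3d, R, t, K):
    """Pinhole projection of (N,3) model points with rotation R, translation t,
    intrinsics K; returns (N,2) pixel coordinates."""
    cam = points_3d @ R.T + t                 # camera-frame coordinates
    uvw = cam @ K.T
    return uvw[:, :2] / uvw[:, 2:3]

def pose_induced_flow(points_3d, pose_init, pose_cur, K):
    """Flow that maps pixels of the image rendered at pose_init to their
    locations under pose_cur, constrained to the object's 3D shape."""
    R0, t0 = pose_init
    R1, t1 = pose_cur
    uv_init = project(points_3d, R0, t0, K)
    uv_cur = project(points_3d, R1, t1, K)
    return uv_init, uv_cur - uv_init          # anchor pixels and their flow

# Hypothetical example values
K = np.array([[572.4, 0, 325.3], [0, 573.6, 242.0], [0, 0, 1.0]])
pts = np.random.rand(500, 3) * 0.1            # object model points (metres)
R0, t0 = np.eye(3), np.array([0.0, 0.0, 0.6])
theta = np.deg2rad(5.0)                       # current estimate: small rotation
R1 = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
uv, flow = pose_induced_flow(pts, (R0, t0), (R1, t0 + np.array([0.01, 0, 0])), K)
print(uv.shape, flow.shape)                   # (500, 2) (500, 2)
```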
|
Dong_Residual_Degradation_Learning_Unfolding_Framework_With_Mixing_Priors_Across_Spectral_CVPR_2023
|
Abstract To acquire a snapshot spectral image, coded aperture snapshot spectral imaging (CASSI) is proposed. A core problem of the CASSI system is to recover the reliable and fine underlying 3D spectral cube from the 2D measure-ment. By alternately solving a data subproblem and a prior subproblem, deep unfolding methods achieve good perfor-mance. However, in the data subproblem, the used sensing matrix is ill-suited for the real degradation process due to the device errors caused by phase aberration, distortion; in the prior subproblem, it is important to design a suitable model to jointly exploit both spatial and spectral priors. In this paper, we propose a Residual Degradation Learn-ing Unfolding Framework (RDLUF), which bridges the gap between the sensing matrix and the degradation process. Moreover, a Mix S2Transformer is designed via mixing pri-ors across spectral and spatial to strengthen the spectral-spatial representation capability. Finally, plugging the MixS2Transformer into the RDLUF leads to an end-to-end trainable neural network RDLUF-Mix S2. Experimental re-sults establish the superior performance of the proposed method over existing ones. Code is available: https: //github.com/ShawnDong98/RDLUF_MixS2
|
1. Introduction With the application of coded aperture snapshot spectral imaging (CASSI) [1, 22, 30, 34], it has become feasible to acquire a spectral image using a coded aperture and disper-sive elements to modulate the spectral scene. By capturing a multiplexed 2D projection of the 3D data cube, CASSI technique provides an efficient approach for acquiring spec-tral data. Nonetheless, the reconstruction of an accurate and detailed 3D hyperspectral image (HSI) cube from the 2D measurements poses a fundamental challenge for the CASSI system. Based on CASSI, various reconstruction techniques have This work was supported by the National Key Research and Devel-opment Program of China (No. 2019YFA0706604), the Natural Science Foundation (NSF) of China (Nos. 61976169, 62293483). * Corresponding author. Figure 1. Comparison of PSNR-Parameters with previous HSI re-construction methods. The PSNR (in dB) is plotted on the vertical axis, while memory cost parameters are represented on the hor-izontal axis. Our proposed Residual Degradation Learning Un-folding Framework with Mixing priors across Spatial and Spec-tral (RDLUF-Mix S2) Transformers outperforms previous meth-ods while requiring fewer parameters. been developed to reconstruct the 3D HSI cube from 2D measurements. These methods range from model-based techniques [15, 17, 18, 30, 33, 35, 38, 43], to end-to-end ap-proaches [5, 13, 16, 22, 23], and deep unfolding methods [14, 31, 32]. Among them, deep unfolding methods have demonstrated superior performance by transferring conven-tional iterative optimization algorithms into a series of deep neural network (DNN) blocks. Typically, the deep unfold-ing methods tackle a data subproblem and a prior subprob-lem iteratively. The data subproblem is highly related to the degrada-tion process. The ways to acquire the degradation matrix in the data subproblem can be classified into two types, the first directly uses the sensing matrix as the degradation ma-trix [19, 21, 32] and the other learns the degradation matrix using a neural network [14,24,44]. However, since the sens-ing matrix is obtained from the equidistant lasers of differ-ent wavelengths on the sensor, it cannot reflect the device errors caused by phase aberration, distortion and alignment of the continuous spectrum. Thus, the earlier kind does not take into account the gap between the sensing matrix and 1 This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 22262 the degradation process. In the latter type, directly model-ing the degradation process is challenging. Considering the challenge of optimizing the original, unreferenced mapping, it is preferable to focus on optimizing the residual mapping. Therefore, we explicitly model the degradation process as residual learning with reference to the sensing matrix. For the prior subproblem, a denoiser is trained to rep-resent the regularization term as a denoising problem in an implicit manner, typically implemented as an end-to-end neural network. Recently, Spectral-wise Multi-head Self-Attention (S-MSA) has been introduced to model long-range dependency in the spectral dimension. However, S-MSA may neglect spatial information that is crucial for gen-erating high-quality HSI images, due to its implicit model-ing of spatial dependency. 
To this end, the integration of Convolutional Neural Networks (CNNs) with S-MSA can provide an ideal solution as CNNs have the inductive bias of modeling local similarity, thus enhancing the spatial mod-eling capabilities of S-MSA. To achieve this, we propose a multiscale convolution branch that processes visual in-formation at multiple scales and then aggregates it to en-able simultaneous feature abstraction from different scales, thereby capturing more textures and details. In this paper, we first unfold the Proximal Gradient De-scent (PGD) algorithm under the framework of maximum a posteriori theory for HSI reconstruction. Then, we integrate the residual degradation learning strategy into the data sub-problem of PGD, which briges the gap between the sensing matrix and the degradation process, leading to our Resid-ual Degradation Learning Unfolding Framework (RDLUF). Secondly, a multiscale convolution called Lightweight In-ception is combined with spectral self-attention in a paral-lel design to address the problem of weak spatial modeling ability of S-MSA. To provide complementary clues in the spectral and spatial branches, we propose a bi-directional interaction across branches, which enhance the modeling ability in spectral and spatial dimensions respectively, re-sulting in our Mixing priors across Spatial and Spectral (MixS2) Transformer. Finally, plugging the Mix S2Trans-former into the RDLUF as the denoiser of the prior sub-problem leads to an end-to-end trainable neural network RDLUF-Mix S2. Equipped with the proposed techniques, RDLUF-Mix S2achieves state-of-the-art (SOTA) perfor-mance on HSI reconstruction, as shown in Fig. 1.
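As a schematic sketch of one unfolding stage, the snippet below runs a proximal-gradient-descent step in which the degradation operator is the calibrated sensing mask plus a learned residual, followed by a learned prior step. The convolutional residual module and denoiser are simple stand-ins for the paper's residual degradation learning block and Mix S2 Transformer, and the CASSI dispersion shift is omitted.

```python
# Schematic sketch (stand-in modules): one unfolding stage of proximal gradient
# descent where the degradation is "sensing matrix + learned residual".
import torch
import torch.nn as nn

class UnfoldingStage(nn.Module):
    def __init__(self, bands=28):
        super().__init__()
        self.rho = nn.Parameter(torch.tensor(0.5))       # learned step size
        # Stand-in for residual degradation learning:
        self.residual_deg = nn.Conv2d(bands, bands, 3, padding=1)
        # Stand-in for the Mix S^2 Transformer prior/denoiser:
        self.denoiser = nn.Sequential(nn.Conv2d(bands, 64, 3, padding=1),
                                      nn.ReLU(),
                                      nn.Conv2d(64, bands, 3, padding=1))

    def degrade(self, x, phi):
        """Shift-free simplification: apply (Phi + learned residual) and sum
        over spectral bands to obtain the 2D measurement."""
        phi_eff = phi + self.residual_deg(phi)
        return (phi_eff * x).sum(dim=1, keepdim=True), phi_eff

    def forward(self, x, y, phi):
        # Data step: gradient descent on ||y - Phi_eff x||^2
        meas, phi_eff = self.degrade(x, phi)
        grad = phi_eff * (meas - y)                      # Phi_eff^T (Phi_eff x - y)
        x = x - self.rho * grad
        # Prior step: proximal mapping realised by the learned denoiser
        return x + self.denoiser(x)

stage = UnfoldingStage()
x = torch.rand(1, 28, 64, 64)        # current HSI estimate (B, bands, H, W)
phi = torch.rand(1, 28, 64, 64)      # calibrated sensing masks per band
y, _ = stage.degrade(x, phi)         # simulated 2D measurement (B, 1, H, W)
print(stage(x, y, phi).shape)
```

Stacking several such stages and training them end to end is what turns the iterative algorithm into a deep unfolding network.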
|
Han_High-Fidelity_Event-Radiance_Recovery_via_Transient_Event_Frequency_CVPR_2023
|
Abstract High-fidelity radiance recovery plays a crucial role in scene information reconstruction and understanding. Con-ventional cameras suffer from limited sensitivity in dynamic range, bit depth, and spectral response, etc. In this paper, we propose to use event cameras with bio-inspired silicon sensors, which are sensitive to radiance changes, to re-cover precise radiance values. We reveal that, under active lighting conditions, the transient frequency of event signals triggering linearly reflects the radiance value. We propose an innovative method to convert the high temporal reso-lution of event signals into precise radiance values. The precise radiance values yields several capabilities in image analysis. We demonstrate the feasibility of recovering radi-ance values solely from the transient event frequency (TEF) through multiple experiments.
|
1. Introduction Scene radiance recovery from pixel values of conven-tional frame-based cameras is challenging due to the lim-ited sensitivity of sensors and the non-linear process in the image signal processor. Event cameras like Dynamic Vision Sensor (DVS) [16] are designed with bio-inspired mechanism that measure the scene radiance changes in an asynchronous manner. Compared with conventional frame-based cameras, event cameras have superior advantages [7], such as very high temporal resolution (in the order of µs), high dynamic range (HDR, up to 120dB), low latency, low power consumption, etc. Due to the specific event triggering mechanism, event cameras only record radiance changes rather than the absolute radiance values. This makes it chal-lenging to directly apply computer vision algorithms de-signed for 2D images containing luminance values for most of the surface points to the data captured by event cameras. We propose an innovative method for high-fidelity radiance recovery, which benefits various spectroscopic and stereo-scopic vision fields, as shown in Fig. 1. † Corresponding author Project page: https://github.com/hjynwa/TEF Light source Target objects Concept Transient Event Frequency Event camera TimeNumber of events Hyperspectral imagingCapabilities Color image restorationDepth sensing Iso-depth contour reconstruction Broadband spectroscopic Precise stereoscopic y x tFigure 1. The concept of transient event frequency and its capabil-ities. When turning the light source on, an event camera captures event signals in a very short time, the frequency of event trigger-ing during this period encodes the scene radiance. The capabilities of transient event frequency are experimentally verified by vari-ous applications, including color image restoration, hyperspectral imaging, depth sensing, and iso-depth contour reconstruction. Since the event signals could be triggered from either lighting condition changes or object motions, there are many methods [14, 27, 29, 44] assuming a constant lighting condition to focus on motion. To acquire more dense event signals used for scene analysis, some researchers have pro-posed to use active lighting that can reflect the information about the entire scene from the event signals [20, 32]. Ac-tive lighting can be a transient state by turning on the light, which only takes less than 0.1second. Under such a circum-stance, the event signals are densely triggered for almost all surface points in the scene, rather than sparsely along the edges. Previous methods [6, 32] have shown that radiance can be recovered by integrating a period of events. How-ever, this approach may suffer from unreliability due to the presence of ghost signals that persist even after the radiance change has ceased [21], resulting in errors in the recovered radiance values. This is caused by the latency and noise This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 20616 of event triggering, which make it difficult to determine the precise termination timestamp for signal accumulation. As a result, integrating events can be challenging when there are extremely large radiance changes. In this paper, a new approach for direct recovery of scene radiance from event signals is proposed. 
We overcome the instability and errors of event signals caused by large luminance changes in a split second by analyzing the transient event frequency (TEF). Events are triggered at a constant frequency during the period of illumination increase, and this frequency linearly represents the radiance value of a point. As shown in Fig. 1, our approach transfers the high temporal resolution of event signals into precise radiance values, and thus inherits the advantages of event cameras, which have low latency and a broader spectral and dynamic range response than conventional cameras. Furthermore, we can directly apply computer vision algorithms developed for 2D images to event-radiance images reconstructed from event signals, expanding the potential applications of event cameras. To the best of our knowledge, this study is the first to show that the high temporal resolution of event signals can be converted to relative radiance values by analyzing the triggering frequency under active lighting. Our contributions are summarized as follows:
1) We propose the concept of TEF during the split second of turning the light on. It precisely represents the relative radiance values in a scene, which is much more stable and accurate compared to the integration of events and the pixel values from conventional cameras.
2) We reveal the linear relationship between TEF and radiance values, which yields several capabilities, including color image restoration, hyperspectral measurement, depth sensing, and iso-depth contour reconstruction.
3) We calibrate the linearity of TEF and measure the spectral response function of the event camera. Multiple experiments validate the broad response and robust precision of radiance values recovered from TEF in spectroscopic and stereoscopic vision fields.
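A small sketch of the TEF computation under these assumptions is given below: count the ON events per pixel inside a short window starting when the light is switched on and divide by the window length. The window length, sensor resolution, and the linear calibration constants are placeholders.

```python
# Sketch (placeholder calibration): per-pixel transient event frequency (TEF)
# from an event stream (x, y, t, polarity) captured while the light turns on.
import numpy as np

def transient_event_frequency(events, sensor_hw, t_on, window=0.05,
                              positive_only=True):
    """events: structured array with fields x, y, t (seconds), p (+1/-1).
    Counts events per pixel inside [t_on, t_on + window] and divides by the
    window length, giving an event-frequency map proportional to radiance."""
    h, w = sensor_hw
    m = (events['t'] >= t_on) & (events['t'] < t_on + window)
    if positive_only:                       # rising illumination -> ON events
        m &= events['p'] > 0
    counts = np.zeros((h, w), dtype=np.float64)
    np.add.at(counts, (events['y'][m], events['x'][m]), 1.0)
    return counts / window                  # events per second per pixel

def tef_to_radiance(tef, gain=1.0, offset=0.0):
    """Linear mapping reported in the paper; gain/offset would come from
    calibration (placeholder values here)."""
    return gain * tef + offset

# Hypothetical toy stream
rng = np.random.default_rng(0)
n = 100_000
events = np.zeros(n, dtype=[('x', 'i4'), ('y', 'i4'), ('t', 'f8'), ('p', 'i1')])
events['x'] = rng.integers(0, 346, n); events['y'] = rng.integers(0, 260, n)
events['t'] = rng.uniform(0.0, 0.1, n); events['p'] = rng.choice([-1, 1], n)
tef = transient_event_frequency(events, (260, 346), t_on=0.02)
print(tef.shape, tef_to_radiance(tef).max())
```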
|
Jain_A_Data-Based_Perspective_on_Transfer_Learning_CVPR_2023
|
Abstract It is commonly believed that in transfer learning including more pre-training data translates into better performance. However, recent evidence suggests that removing data from the source dataset can actually help too. In this work, we take a closer look at the role of the source dataset’s compo-sition in transfer learning and present a framework for prob-ing its impact on downstream performance. Our framework gives rise to new capabilities such as pinpointing transfer learning brittleness as well as detecting pathologies such as data-leakage and the presence of misleading examples in the source dataset. In particular, we demonstrate that removing detrimental datapoints identified by our framework indeed improves transfer learning performance from ImageNet on a variety of target tasks.1
|
1. Introduction
Transfer learning enables us to adapt a model trained on a source dataset to perform better on a downstream target task. This technique is employed in a range of machine learning applications including radiology [23, 45], autonomous driving [11, 24], and satellite imagery analysis [44, 47]. Despite its successes, however, it is still not clear what the drivers of performance gains brought by transfer learning actually are. So far, a dominant approach to studying these drivers focused on the role of the source model, i.e., the model trained on the source dataset. The corresponding works involve investigating the source model's architecture [23], accuracy [27], adversarial vulnerability [42, 43], and training procedure [21, 30]. This line of work makes it clear that the properties of the source model have a significant impact on transfer learning. There is some evidence, however, that the source dataset might play an important role as well [18, 26, 38]. For example, several works have shown that while increasing the size of the source dataset generally boosts transfer learning performance, removing specific classes can help too [18, 26, 38]. All of this motivates a natural question: How can we pinpoint the exact impact of the source dataset in transfer learning?
Our Contributions. In this paper, we present a framework for measuring and analyzing the impact of the source dataset's composition on transfer learning performance. To do this, our framework provides us with the ability to investigate the counterfactual impact on downstream predictions of including or excluding datapoints from the source dataset, drawing inspiration from classical supervised learning techniques such as influence functions [7, 13, 25] and datamodels [19]. Using our framework, we can:
• Pinpoint what parts of the source dataset are most utilized by the downstream task.
• Automatically extract granular subpopulations in the target dataset through projection of the fine-grained labels of the source dataset.
• Surface pathologies such as source-target data leakage and mislabelled source datapoints.
We also demonstrate how our framework can be used to find detrimental subsets of ImageNet [9] that, when removed, give rise to better downstream performance on a variety of image classification tasks.
*Equal contribution. Code is available at https://github.com/MadryLab/data-transfer
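To illustrate the counterfactual idea behind such a framework, the sketch below estimates a per-class influence score by comparing downstream accuracy of models transferred from random source subsets that do and do not contain each class. Here `pretrain_and_transfer` is a fake stand-in for the full pretrain-then-transfer pipeline, and this simple subset-averaging estimator is only loosely inspired by, not identical to, the paper's datamodel-based procedure.

```python
# Schematic sketch: subset-based counterfactual influence of source classes on
# downstream transfer accuracy. `pretrain_and_transfer` is a placeholder for
# "pretrain on these source classes, fine-tune on the target task, return
# target accuracy"; here it is faked so the sketch runs end to end.
import random

def pretrain_and_transfer(source_classes, seed=0):
    random.seed(hash((tuple(sorted(source_classes)), seed)))
    return 0.70 + 0.25 * random.random()          # fake target accuracy

def class_influence(all_classes, num_subsets=50, keep_frac=0.5):
    """Average target accuracy with vs. without each source class over many
    random subsets; negative influence suggests a detrimental class."""
    with_acc = {c: [] for c in all_classes}
    without_acc = {c: [] for c in all_classes}
    for s in range(num_subsets):
        random.seed(s)
        subset = set(random.sample(all_classes, int(keep_frac * len(all_classes))))
        acc = pretrain_and_transfer(subset, seed=s)
        for c in all_classes:
            (with_acc if c in subset else without_acc)[c].append(acc)
    return {c: sum(with_acc[c]) / max(len(with_acc[c]), 1)
               - sum(without_acc[c]) / max(len(without_acc[c]), 1)
            for c in all_classes}

classes = [f"class_{i}" for i in range(20)]
influence = class_influence(classes)
detrimental = [c for c, v in sorted(influence.items(), key=lambda kv: kv[1])[:3]]
print("candidate detrimental source classes:", detrimental)
```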
|
Hwang_Meta-Explore_Exploratory_Hierarchical_Vision-and-Language_Navigation_Using_Scene_Object_Spectrum_Grounding_CVPR_2023
|
Abstract The main challenge in vision-and-language navigation (VLN) is how to understand natural-language instructions in an unseen environment. The main limitation of conven-tional VLN algorithms is that if an action is mistaken, theagent fails to follow the instructions or explores unneces-sary regions, leading the agent to an irrecoverable path. To tackle this problem, we propose Meta-Explore, a hierarchi-cal navigation method deploying an exploitation policy tocorrect misled recent actions. We show that an exploitationpolicy, which moves the agent toward a well-chosen localgoal among unvisited but observable states, outperforms amethod which moves the agent to a previously visited state.We also highlight the demand for imagining regretful explo-rations with semantically meaningful clues. The key to ourapproach is understanding the object placements around theagent in spectral-domain. Specifically, we present a novelvisual representation, called scene object spectrum (SOS), which performs category-wise 2D F ourier transform of de-tected objects. Combining exploitation policy and SOS fea-tures, the agent can correct its path by choosing a promis-ing local goal. We evaluate our method in three VLN bench-marks: R2R, SOON, and REVERIE. Meta-Explore outper-forms other baselines and shows significant generalization performance. In addition, local goal search using the pro-posed spectral-domain SOS features significantly improvesthe success rate by 17.1% and SPL by 20.6% against the state-of-the-art method of the SOON benchmark. Project page:https://rllab-snu.github.io/projects/Meta-Explore/doc.html
|
1. Introduction Visual navigation in indoor environments has been stud-ied widely and shown that an agent can navigate in unex-This work was in part supported by Institute of Information & Communica-tions Technology Planning & Evaluation (IITP) grant funded by the Korea govern-ment (MSIT) (No. 2019-0-01190, [SW Star Lab] Robot Learning: Efficient, Safe, and Socially-Acceptable Machine Learning, 80%, and No.2022-0-00907, Develop-ment of AI Bots Collaboration Platform and Self-Organizing AI, 20%). (Corre-sponding authors: Y oonseon Oh and Songhwai Oh.) startlocal goalglobal goal l g t ݐ௧visited nodeunvisited, observable nodeInstruction: "Walk forward, keeping the long table to the left. Exit the room via the white door to the left of the stairs . Descend a narrow circular stairwell and wait, facing two windows with circular stained glass in their centers." Figure 1. Hierarchical Exploration. At each episode, a natural-language instruction is given to the agent to navigate to a goal lo-cation. The agent explores the environment and constructs a topo-logical map by recording visited nodes and next step reachable nodes . Each node consists of the position of the agent and visual features. otdenotes the observation at time t. The agent chooses an unvisited local goal to solve the regretful exploration problem. plored environments [ 1]. By recognizing the visual context and constructing a map, an agent can explore the environ-ment and solve tasks such as moving towards a goal or fol-lowing a desired trajectory. With the increasing developmentin human language understanding, vision-and-language nav-igation (VLN) [ 2] has enabled robots to communicate with humans using natural languages. The high degree of free-dom in natural language instructions allows VLN to expandto various tasks, including (1) following fine-grained step-by-step instructions [ 2–13] and (2) reaching a target location described by goal-oriented language instructions [ 14–20]. A challenging issue in VLN is the case when an action is mistaken with respect to the given language instruction[21–26]. For instance, if the agent is asked to turn right at the end of the hallway but turns left, the agent may end up in ir-recoverable paths. Several existing studies solve this issuevia hierarchical exploration, where the high-level planner decides when to explore and the low-level planner chooses what actions to take. If the high-level planner chooses to explore, the agent searches unexplored regions, and if it chooses to exploit, the agent executes the best action based on the previous exploration. Prior work [ 21–23] returns the agent to the last successful state and resumes exploration. This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 6683 However, such methods take a heuristic approach because the agent only backtracks to a recently visited location. Theagent does not take advantage of the constructed map andinstead naively uses its recent trajectory for backtracking.Another recent work [ 26] suggests graph-based exploitation, which uses a topological map to expand the action space inglobal planning. Still, this method assumes that the agent candirectly jump to a previously visited node. Since this method can perform a jump action at every timestep, there is no trig-ger that explicitly decides when to explore and when to ex-ploit. 
Therefore, we address the importance of time schedul-ing for exploration-exploitation and efficient global planning using a topological map to avoid reexploring visited regions. We expand the notion of hierarchical exploration by proposing Meta-Explore, which not only allows the high-level planner to choose when to correct misled local move-ments but also finds an unvisited state inferred to be close to the global goal. We illustrate the overview of hierarchi-cal exploration in Figure 1. Instead of backtracking, we present an exploitation method called local goal search. Weshow that it is more efficient to plan a path to a local goal,which is the most promising node from the unvisited but reachable nodes. We illustrate the difference between con-ventional backtracking and local goal search in Figure 2. Based on our method, we show that exploration and ex-ploitation are not independent and can complement eachother: (1) to overtake regretful explorations, the agent canperform exploitation and (2) the agent can utilize the con-structed topological map for local goal search. We also highlight the demand for imagining regretful explorations with semantically meaningful clues. Most VLN tasks re-quire a level of understanding objects nearby the agent, but previous studies simply encode observed panoramic or ob-ject images [ 2,3,16–18,21–35]. In this paper, we present a novel semantic representation of the scene called scene object spectrum (SOS), which is a matrix containing the ar-rangements and frequencies of objects from the visual ob-servation at each location. Using SOS features, we can suf-ficiently estimate the context of the environment. We show that the proposed spectral-domain SOS features manifestbetter linguistic interpretability than conventional spatial-domain visual features. Combining exploitation policy andSOS features, we design a navigation score that measuresthe alignment between a given language instruction and acorrected trajectory toward a local goal. The agent com-pares local goal candidates and selects a near-optimal can-didate with the highest navigation score from corrected tra-jectories. This involves high-level reasoning related to thelandmarks (e.g., bedroom and kitchen) and objects (e.g., ta-ble and window) that appear in the instructions. The main contributions of this paper are as follows: • We propose a hierarchical navigation method called Meta-Explore, deploying an exploitation policy to cor-“Which visited node” “Which unvisited node” “is the most likely to be a local goal ?”current position startlocal goalglobal goal visited node unvisited node searchable area area < Figure 2. Local Goal Search for Exploitation. The local goal is likely to be chosen as the closest node to the global goal. Existing methods only backtrack to a visited node (left). We expand the searchable area by including unvisited but reachable nodes (right). rect misled recent actions. The agent searches for an appropriate local goal instead of reversing the recentaction sequence. • In the exploitation mode, the agent uses a novel scene representation called scene object spectrum (SOS), which contains the spectral information of the object placements in the scene. SOS features provide seman-tically meaningful clues to choose a near-optimal localgoal and help the agent to solve the regretful explo-ration problem. • We evaluate our method on three VLN benchmarks: R2R [ 2], SOON [ 16], and REVERIE [ 17]. 
The exper-imental results show that the proposed method, Meta-Explore, improves the success rate and SPL in testsplits of R2R, SOON and val split of REVERIE. The proposed method shows better generalization results compared to all baselines.
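As a toy sketch of a scene-object-spectrum-style feature, the snippet below rasterises detected bounding boxes into per-category occupancy maps over the panoramic view and applies a category-wise 2D FFT. The rasterisation resolution, the detection format, and the low-frequency reduction to a compact descriptor are assumptions rather than the paper's exact construction.

```python
# Toy sketch (assumed rasterisation/reduction): a scene-object-spectrum-style
# feature = category-wise 2D Fourier transform of detected object masks.
import numpy as np

def scene_object_spectrum(detections, num_categories, hw=(64, 256), top_k=8):
    """detections: list of (category_id, x0, y0, x1, y1) in normalised [0,1]
    panoramic coordinates. Returns (num_categories, top_k) magnitudes of the
    lowest spatial frequencies per category."""
    h, w = hw
    masks = np.zeros((num_categories, h, w), dtype=np.float32)
    for cat, x0, y0, x1, y1 in detections:
        r0, r1 = int(y0 * h), max(int(y1 * h), int(y0 * h) + 1)
        c0, c1 = int(x0 * w), max(int(x1 * w), int(x0 * w) + 1)
        masks[cat, r0:r1, c0:c1] = 1.0        # box occupancy for this category
    spectrum = np.abs(np.fft.fft2(masks, axes=(-2, -1)))   # category-wise 2D FFT
    # Keep a few low-frequency magnitudes as a compact descriptor
    flat = spectrum[:, :top_k, :top_k].reshape(num_categories, -1)
    return flat[:, :top_k]

dets = [(3, 0.10, 0.30, 0.25, 0.80),          # e.g. a table
        (3, 0.60, 0.35, 0.70, 0.75),          # another table
        (7, 0.40, 0.05, 0.55, 0.30)]          # e.g. a window
sos = scene_object_spectrum(dets, num_categories=10)
print(sos.shape)                              # (10, 8)
```

A descriptor of this kind can then be compared against landmark and object words in the instruction when scoring candidate local goals.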
|
Chen_Improved_Test-Time_Adaptation_for_Domain_Generalization_CVPR_2023
|
Abstract The main challenge in domain generalization (DG) is to handle the distribution shift problem that lies between the training and test data. Recent studies suggest that test-time training (TTT), which adapts the learned model with test data, might be a promising solution to the problem. Gen-erally, a TTT strategy hinges its performance on two main factors: selecting an appropriate auxiliary TTT task for up-dating and identifying reliable parameters to update during the test phase. Both previous arts and our experiments in-dicate that TTT may not improve but be detrimental to the learned model if those two factors are not properly consid-ered. This work addresses those two factors by proposing anImproved Test-TimeAdaptation (ITTA) method. First, in-stead of heuristically defining an auxiliary objective, we pro-pose a learnable consistency loss for the TTT task, which con-tains learnable parameters that can be adjusted toward bet-ter alignment between our TTT task and the main prediction task. Second, we introduce additional adaptive parameters for the trained model, and we suggest only updating the adap-tive parameters during the test phase. Through extensive ex-periments, we show that the proposed two strategies are ben-eficial for the learned model (see Figure 1), and ITTA could achieve superior performance to the current state-of-the-art methods on several DG benchmarks. Code is available at https://github.com/liangchen527/ITTA .
|
1. Introduction Recent years have witnessed the rapid development of deep learning models, which often assume the training and test data are from the same domain and follow the same distribution. However, this assumption does not always hold in real-world scenarios. Distribution shift among the source and target domains is ubiquitous in related areas [35], such as autonomous driving or object recognition tasks, resulting *Corresponding authors. This work is done when L. Chen is an intern in Tencent AI Lab. 0.51.10.51.2 0.50.50.51.4 0.40.40.40.3 artcartoonphotosketch 79.975.494.475.8 83.376.094.476.7 84.778.094.578.2 Figure 1. Performance improvements from the proposed two strate-gies ( i.e. introducing a learnable consistency loss and including additional adaptive param eters to improve TTT) for the baseline model ( i.e. ResNet18 [30] with existing augmentation strategy [74]). Experiments are conducted on the PACS dataset [37] with the leave-one-out setting. Following [27], we use 60 sets of random seeds and hyper-parameters for each target domain. The reported average accuracy and error bars verify the effectiveness of our method. in poor performances for delicately designed models and hindering the further application of deep learning techniques. Domain generalization (DG) [2,8,16,23,24,31,38 –40,40, 44, 46, 50, 51, 68], designed to generalize a learned model to unseen target domains, has attracted a great deal of attention in the research community. The problem can be traced back to a decade ago [7], and various approaches have been pro-posed to push the DG boundary ever since. Those efforts in-clude invariant representation learning [28,46,48,57], adver-sarial learning [23,40,44,68], augmentation [9,41,42,65,74], or meta-learning [2, 16, 38, 39]. Despite successes on certain occasions, a recent study [27] shows that, under a rigorous evaluation protocol, most of these arts are inferior to the baseline empirical risk minimization (ERM) method [60]. This finding is not surprising, as most current arts strive to decrease the distribution shift only through the training data while overlooking the contributions from test samples. Recently, the test-time training (TTT) technique [59] has been gaining momentum for easing the distribution shift problem. TTT lies its success in enabling dynamic tuning of the pretrained model with the test samples via an auxil-iary TTT task, which seems to be a promising effort when This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 24172 confronting data from different domains. However, TTT is not guaranteed to improve the performance. Previous arts [45, 62] indicate that selecting an appropriate auxiliary TTT task is crucial, and an inappropriate one that does not align with the main loss may deteriorate instead of improv-ing the performance. Meanwhile, it is pointed out in [62] that identifying reliable parameters to update is also essential for generalization, which is in line with our experimental findings in Sec. 5.3. Both of these two tasks are non-trivial, and there are limited efforts made to address them. This paper aims to improve the TTT strategy for better DG. First, different from previous works that empirically define auxiliary objectives and assume they are aligned with the main task, our work does not make such assumptions. 
Instead, we suggest learning an appropriate auxiliary loss for test-time updating. Specifically, encouraged by recent successes in multi-view consistency learning [13,26,29], we propose to augment the consistency loss by adding learn-able parameters based on the original implementation, where the parameters can be adjusted to assure our TTT task can be more aligned with the main task and are updated by en-forcing the two tasks share the same optimization direction. Second, considering that identifying reliable parameters to update is an everlasting job given the growing size of current deep models, we suggest introducing new adaptive param-eters after each block during the test phase, and we only tune the new parameters by the learned consistency loss while leaving the original parameters unchanged. Through extensive evaluations on the current benchmark [27], we illustrate that the learnable consistency loss performs more effectively than the self-supervised TTT tasks adopted in previous arts [59, 62], and by tuning only the new adaptive parameters, our method is superior to existing strategies that update all the parameters or part of them. This work aims to ease the distribution shift problem by improving TTT, and the main contributions are three-fold: •We introduce a learnable consistency loss for test-time adaptation, which can be enforced to be more aligned with the main loss by tuning its learnable parameters. •We introduce new adaptive parameters for the trained model and only update them during the test phase. •We conduct experiments on various DG benchmarks and illustrate that our ITTA performs competitively against current arts under the rigorous setting [27] for both the multi-source and single-source DG tasks.
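The sketch below illustrates the overall test-time procedure under simplifying assumptions: small residual adapters are inserted after frozen backbone blocks, and only they, together with a learnable weighting of a two-view consistency loss, are updated at test time. The adapter design, the augmentations, and the way the loss parameters are learned here are placeholders and do not reproduce the paper's alignment-based update of the learnable loss.

```python
# Compact sketch (simplified placeholders): test-time adaptation that updates
# only newly inserted adapter parameters with a consistency loss between two
# augmented views, while the pretrained backbone stays frozen.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Adapter(nn.Module):
    """Lightweight residual adapter inserted after a backbone block."""
    def __init__(self, dim):
        super().__init__()
        self.down, self.up = nn.Linear(dim, dim // 4), nn.Linear(dim // 4, dim)

    def forward(self, x):
        return x + self.up(F.relu(self.down(x)))

class AdaptedModel(nn.Module):
    def __init__(self, backbone_blocks, dims, num_classes):
        super().__init__()
        self.blocks = nn.ModuleList(backbone_blocks)              # frozen
        self.adapters = nn.ModuleList(Adapter(d) for d in dims)   # trainable
        self.head = nn.Linear(dims[-1], num_classes)              # frozen
        # Learnable per-layer weights of the consistency loss (simplified)
        self.loss_w = nn.Parameter(torch.ones(len(dims)))

    def forward(self, x):
        feats = []
        for blk, ad in zip(self.blocks, self.adapters):
            x = ad(blk(x)); feats.append(x)
        return self.head(x), feats

def test_time_step(model, x, augment, opt):
    logits1, f1 = model(augment(x))
    logits2, f2 = model(augment(x))
    w = torch.softmax(model.loss_w, dim=0)
    cons = sum(wi * F.mse_loss(a, b) for wi, a, b in zip(w, f1, f2))
    opt.zero_grad(); cons.backward(); opt.step()
    return (logits1 + logits2).argmax(dim=1)   # prediction after the update

dims = [64, 64, 64]
blocks = [nn.Sequential(nn.Linear(64, 64), nn.ReLU()) for _ in dims]
model = AdaptedModel(blocks, dims, num_classes=7)
for p in model.blocks.parameters(): p.requires_grad_(False)
for p in model.head.parameters(): p.requires_grad_(False)
opt = torch.optim.SGD([*model.adapters.parameters(), model.loss_w], lr=1e-3)
aug = lambda x: x + 0.01 * torch.randn_like(x)
print(test_time_step(model, torch.randn(8, 64), aug, opt))
```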
|
Chen_DPF_Learning_Dense_Prediction_Fields_With_Weak_Supervision_CVPR_2023
|
Abstract Nowadays, many visual scene understanding problems are addressed by dense prediction networks. But pixel-wise dense annotations are very expensive (e.g., for scene pars-ing) or impossible (e.g., for intrinsic image decomposition), motivating us to leverage cheap point-level weak supervi-sion. However, existing pointly-supervised methods still use the same architecture designed for full supervision. In stark contrast to them, we propose a new paradigm that makes predictions for point coordinate queries , as inspired by the recent success of implicit representations, like distance or radiance fields. As such, the method is named as dense pre-diction fields (DPFs). DPFs generate expressive interme-diate features for continuous sub-pixel locations, thus al-lowing outputs of an arbitrary resolution. DPFs are nat-urally compatible with point-level supervision. We show-case the effectiveness of DPFs using two substantially dif-ferent tasks: high-level semantic parsing and low-level in-trinsic image decomposition. In these two cases, supervi-sion comes in the form of single-point semantic category and two-point relative reflectance, respectively. As bench-marked by three large-scale public datasets PASCALCon-text, ADE20K and IIW, DPFs set new state-of-the-art per-formance on all of them with significant margins. Code can be accessed at https://github.com/cxx226/DPF .
|
1. Introduction The field of visual scene understanding aims to recover various scene properties from input images, e.g., seman-tic labels [24], depth values [49] [66], edge existence [1] or action affordance [10]. Successful and comprehensive scene understanding is the cornerstone of various emerging artificial intelligence applications, like autonomous driving, intelligent robots or smart manufacturing. Albeit difficult, this field has seen great progress thanks to end-to-end dense prediction networks like DPT [48] and large-scale densely-labelled datasets like ADE20K [67]. If we can densely label InputDense Prediction Backbone (b) DPF (ours) Guidance Encoder (x,y)Latent code Dense Prediction Backbone Guidance InputImplicit Function (a) General formulationPoint query A B: :same reflectance A is darker (d) Pairwise reflectance comparison (c) Point semantic annotation :cat :cloth :tree :others QueryFigure 1. (a) Existing dense prediction formulation. (b) Our DPF formulation. (c) Semantic annotation for single points. (d) Pair-wise reflectance annotation between two points. every property that we care about, totally solving the visual scene understanding problem seems a matter of time. However, dense annotations are usually too expensive or impossible to obtain. According to the Cityscapes pa-per [13], it takes 1.5 hours to generate a high-quality seman-tic annotation map for a single image. What’s worse, for the problem of decomposing an image into reflectance and shading1, it’s impossible for humans to provide pixel-wise ground truth values. As such, the largest intrinsic image decomposition dataset IIW [7] is annotated in the form of pair-wise reflectance comparison between two points. An-notators are guided to judge whether the reflectance of one point is darker than that of another point or not. Given the importance of dense prediction and the diffi-culty of obtaining dense annotations, we focus on learning with point-level weak supervision. Fig. 1-c shows an exam-ple of point-level semantic scene parsing annotation. The sole red point on the cat is annotated as cat, which is much more cheaper than delineating the cat’s contours. Fig. 1-d shows human judgement of relative reflectance annotation between every pair of two points. Since the floor has a con-stant reflectance, point pairs on the floor are annotated with 1Intrinsic image decomposition. This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 15347 theequal label. Since the table has a darker reflectance than the floor, pairs between the table point and the floor point are annotated with the darker label. How could we effectively learn dense prediction mod-els from these kinds of point-level weak supervision? To this end, existing pointly-supervised methods leverage un-labelled points using various techniques like online expan-sion [47], uncertainty mixture [64] [57] or edge guidance [18]. But they all exploit conventional formulations shown in Fig. 1-a, by converting point-level supervision into dense ground truth maps with padded ignore values . By contrast, we seek alternative network architectures that are naturally compatible with point-level supervision. Specifically, we take inspiration from the success of neural implicit repre-sentations. DeepSDF [46] takes 3D coordinates as input and predicts signed distance values. 
NeRF [41] takes 5D coordinates as input and predicts radiance/transparency values. Similarly, our method takes 2D coordinates as input and predicts semantic label or reflectance values, as shown in Fig. 1-b. An intriguing feature of this new scheme is that high-resolution images can be encoded as guidance in a natural way, because this new continuous formulation allows outputs of arbitrarily large or small resolution. Borrowing names from the research community of distance or radiance fields, our method is called dense prediction fields (DPFs).
In order to show that DPF is a strong and generic method, we use two pointly-supervised tasks: semantic scene parsing and intrinsic image decomposition. These two tasks differ in many aspects: (1) Scene parsing is a high-level cognitive understanding task while intrinsic decomposition is a low-level physical understanding task; (2) Scene parsing outputs discrete probability vectors while intrinsic decomposition outputs continuous reflectance/shading values; (3) Scene parsing is annotated with single points while intrinsic decomposition is annotated with two-point pairs. Interestingly and surprisingly, our method achieves new state-of-the-art results on both of them, as benchmarked by three widely used datasets PASCAL-Context, ADE20K and IIW.
To summarize, the contributions of our work include:
• We propose a novel methodology for learning dense prediction models from point-level weak supervision, named DPF. DPF takes 2D coordinates as inputs and allows outputs of an arbitrary resolution.
• We set new state-of-the-art performance on the PASCAL-Context and ADE20K datasets for scene parsing and the IIW dataset for intrinsic decomposition with point-level weak supervision. Code is publicly available.
• With systematic ablations, visualization and analysis, we delve into the mechanism of DPF and reveal that its superior performance is credited to locally smooth embeddings and high-resolution guidance.
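To make the coordinate-query formulation above concrete, the following is a minimal, hypothetical PyTorch sketch (not the authors' implementation): an implicit function takes continuous 2D query coordinates plus guidance features sampled from a dense backbone and predicts a label per point, so that a loss can be applied only at the annotated points. The toy backbone, feature dimensions, and class count are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DensePredictionField(nn.Module):
    def __init__(self, feat_dim=64, num_classes=60):
        super().__init__()
        # stand-in "backbone": any dense feature extractor would do here
        self.backbone = nn.Conv2d(3, feat_dim, kernel_size=3, padding=1)
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 2, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, image, coords):
        # image: (B, 3, H, W); coords: (B, N, 2) in [-1, 1]
        feats = self.backbone(image)                             # (B, C, H, W)
        grid = coords.unsqueeze(2)                               # (B, N, 1, 2)
        # sample guidance features at sub-pixel query locations
        point_feats = F.grid_sample(feats, grid, align_corners=True)
        point_feats = point_feats.squeeze(-1).permute(0, 2, 1)   # (B, N, C)
        x = torch.cat([point_feats, coords], dim=-1)             # append raw (x, y)
        return self.mlp(x)                                       # (B, N, num_classes)

# point-level supervision: cross-entropy only at the annotated points
model = DensePredictionField()
img = torch.randn(2, 3, 128, 128)
pts = torch.rand(2, 16, 2) * 2 - 1            # 16 annotated points per image
labels = torch.randint(0, 60, (2, 16))
loss = F.cross_entropy(model(img, pts).flatten(0, 1), labels.flatten())
loss.backward()
```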
|
Li_Azimuth_Super-Resolution_for_FMCW_Radar_in_Autonomous_Driving_CVPR_2023
|
Abstract
We tackle the task of Azimuth (angular dimension) super-
resolution for Frequency Modulated Continuous Wave
(FMCW) multiple-input multiple-output (MIMO) radar.
FMCW MIMO radar is widely used in autonomous driv-
ing alongside Lidar and RGB cameras. However, com-
pared to Lidar, MIMO radar is usually of low resolution
due to hardware size restrictions. For example, achieving
1◦azimuth resolution requires at least 100 receivers, but a
single MIMO device usually supports at most 12 receivers.
Having limitations on the number of receivers is problem-
atic since a high-resolution measurement of azimuth angle
is essential for estimating the location and velocity of ob-
jects. To improve the azimuth resolution of MIMO radar,
we propose a light, yet efficient, Analog-to-Digital super-
resolution model (ADC-SR) that predicts or hallucinates
additional radar signals using signals from only a few re-
ceivers. Compared with the baseline models that are ap-
plied to processed radar Range-Azimuth-Doppler (RAD)
maps, we show that our ADC-SR method that processes raw
ADC signals achieves comparable performance with 98%
(50 times) fewer parameters. We also propose a hybrid
super-resolution model (Hybrid-SR) combining our ADC-
SR with a standard RAD super-resolution model, and show
that performance can be improved by a large margin. Ex-
periments on our Pitt-Radar dataset and the RADIal dataset
validate the importance of leveraging raw radar ADC sig-
nals. To assess the value of our super-resolution model for
autonomous driving, we also perform object detection on
the results of our super-resolution model and find that our
super-resolution model improves detection performance by
around 4% in mAP . The Pitt-Radar and the code will be
released at the link.
|
1. Introduction
We address the task of azimuth angle super-resolution
for Frequency Modulated Continuous Wave (FMCW) [25]
Multiple Input Multiple Output (MIMO) [26] radar in au-
tonomous driving. In addition to Lidar and RGB cam-
Figure 1. Previous works perform super-resolution from low-
resolution and high-resolution Range-Azimuth maps after apply-
ing the Fast Fourier Transform (FFT). Instead, we aim to
predict more uncaptured ADC signals before they are transformed
into Range-Azimuth (RA) maps.
eras, Radar has been commonly used for autonomous ve-
hicles [4, 5] due to its robustness in adverse weather condi-
tions ( e.g., fog, snow, rain) with longer wavelength. FMCW
MIMO radar uses a line array of receiver antennas to cap-
ture reflected signals of multiple chirps sent out by a line
array of transmitters. We can characterize the location
(orientation and distance) and velocity of nearby objects
from a radar’s Range-Azimuth-Doppler (RAD) map. The
RAD map is computed using the Discrete Time Fast Fourier
Transform (FFT) [11, 26] on the discretized receiver sig-
nals after Analog-to-Digital conversion (ADC). In the do-
main of autonomous driving, MIMO [26] radar devices typ-
ically have low resolution due to the physical constraint on
the size of the sensors. For example, a device with eight
antenna receivers has at most an angle resolution of about
15◦[11]. Thus, it is important to develop new technologies
which can increase the azimuth resolution of radar sensing,
without requiring a large sensor size.
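As a concrete illustration of the processing chain just described, the following is a simplified NumPy sketch of turning raw ADC samples into a Range-Azimuth-Doppler map via FFTs along the sample (range), chirp (Doppler), and receiver (azimuth) axes. The array sizes are arbitrary assumptions rather than the sensor configuration used in the paper, and zero-padding the receiver axis only interpolates the azimuth spectrum rather than improving the true angular resolution.

```python
import numpy as np

num_samples, num_chirps, num_rx = 256, 64, 8   # ADC samples, chirps, receivers
adc = np.random.randn(num_samples, num_chirps, num_rx) \
    + 1j * np.random.randn(num_samples, num_chirps, num_rx)

range_fft = np.fft.fft(adc, axis=0)                              # range bins
doppler_fft = np.fft.fftshift(np.fft.fft(range_fft, axis=1), axes=1)
# zero-padding the receiver axis before the FFT only interpolates the azimuth
# spectrum; the true resolution is still limited by the 8 physical receivers
azimuth_fft = np.fft.fftshift(np.fft.fft(doppler_fft, n=64, axis=2), axes=2)

rad_map = np.abs(azimuth_fft)                                    # magnitude-only RAD map
ra_map = rad_map.sum(axis=1)                                     # collapse Doppler -> RA map
print(rad_map.shape, ra_map.shape)                               # (256, 64, 64) (256, 64)
```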
For MIMO radar [26], both range resolution and velocity
(Doppler) resolution can be improved by setting different
bandwidths (the range of frequencies used for signal pulses)
and frame times (duration of a single signal pulse). How-
ever, azimuth resolution is strictly dependent on the radar
sensor’s hardware specifications, such as antenna size, and
cannot be improved by changing the parameters of the radar
sensor [15, 34], which restricts radar imaging performance.
To improve the azimuth resolution of radar Range-Azimuth
(RA) maps, some work proposes to use regularization ap-
proaches from signal processing [31, 32, 35] and deep mod-
els [2, 10] for azimuth super-resolution. However, such
prior work is limited to processing the RA maps (not even
using Doppler information) as shown in Figure 1. Since
RAD and RA maps are obtained by keeping only the magnitude
after the FFTs, the loss of information about the relationship
between receivers may lead to limited performance for azimuth
super-resolution. In addition, how to enhance the azimuth
resolution by processing raw ADC signals directly also remains
challenging.
In order to tackle the challenge of azimuth angle super-
resolution on the ADC signals, we propose a light, yet
efficient, ADC super-resolution (ADC-SR) model for az-
imuth resolution. Since the azimuth resolution is related
to the number of receivers each capturing independent sig-
nals, we aim to predict uncaptured signals from halluci-
nated receivers. For example, the model takes ADC signals
recorded with 4 receivers and outputs the predicted ADC
signals with 8 receivers, as shown in Figure 1. To the best
of our knowledge, our ADC-SR is the first to apply deep
models to ADC signals for azimuth super-resolution. In ad-
dition, the hallucinated ADC signals can then be processed
with FFTs [26] to obtain high-resolution RAD maps for fur-
ther use in autonomous driving such as object detection.
We note that we can further refine the RAD maps with the
RAD super-resolution (RAD-SR) model. To evaluate and
compare our approach with other baseline models, we have
collected a dataset named Pitt-Radar which contains ADC
signals. Compared with RAD-SR which relies on RAD
data, our ADC-SR achieves comparable performance with
fewer network parameters. We also show that our hybrid
pipeline named Hybrid-SR combining ADC-SR and RAD-
SR improves the baseline model by a large margin. More-
over, since naive bilinear downsampling of the RAD map
does not truly reflect the outputs of lower-resolution radar
sensors, we propose a more theoretically grounded way of
downsampling for training, which we describe later.
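The following is a minimal, hypothetical sketch of the ADC-SR idea described above: the complex ADC cube is split into real/imaginary channels and a small convolutional network predicts signals for hallucinated receivers (here 4 -> 8). The architecture, sizes, and layer choices are assumptions for illustration, not the paper's model.

```python
import torch
import torch.nn as nn

class ADCSuperResolution(nn.Module):
    def __init__(self, in_rx=4, out_rx=8):
        super().__init__()
        # input: (B, 2*in_rx, samples, chirps) with real/imag stacked per receiver
        self.net = nn.Sequential(
            nn.Conv2d(2 * in_rx, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 2 * out_rx, kernel_size=3, padding=1),
        )

    def forward(self, adc):
        # adc: complex tensor (B, in_rx, samples, chirps)
        x = torch.cat([adc.real, adc.imag], dim=1)        # (B, 2*in_rx, S, C)
        y = self.net(x)                                   # (B, 2*out_rx, S, C)
        real, imag = y.chunk(2, dim=1)
        return torch.complex(real, imag)                  # (B, out_rx, S, C)

model = ADCSuperResolution()
low_res = torch.randn(1, 4, 256, 64, dtype=torch.cfloat)
high_res = model(low_res)              # hallucinated 8-receiver ADC cube
print(high_res.shape)                  # torch.Size([1, 8, 256, 64])
```

The hallucinated cube can then be fed to the standard FFT pipeline to obtain a higher-resolution RAD map.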
To assess the value of our super-resolution model for
autonomous driving, we also evaluate the performance
of existing object detectors trained along with our super-
resolution models. The improvements in the experiments
demonstrate that our approach is applicable to downstream tasks in autonomous driving. The contributions of this paper can
be summarized as follows:
• We propose an azimuth super-resolution model named
ADC-SR which takes complex ADC radar signals as input
and predicts the signals from unseen receivers, which
is able to produce higher-resolution RAD maps.
• We propose a hybrid model named Hybrid-SR for im-
proved performance on RAD-SR when combined with
our ADC-SR. To make comparisons with all of the
baseline models, a downsampling method is also pro-
posed for evaluation.
• We propose a MIMO radar dataset named Pitt-Radar
which contains ADC signals for benchmarking.
• Our developed model not only achieves satisfactory
azimuth super-resolution on our collected Pitt-Radar
and one benchmark dataset but also improves the ex-
isting object detector by a large margin.
|
Li_Mask_DINO_Towards_a_Unified_Transformer-Based_Framework_for_Object_Detection_CVPR_2023
|
Abstract
In this paper we present Mask DINO, a unified object
detection and segmentation framework. Mask DINO extends
DINO (DETR with Improved Denoising Anchor Boxes) by
adding a mask prediction branch which supports all im-
age segmentation tasks (instance, panoptic, and semantic).
It makes use of the query embeddings from DINO to dot-
product a high-resolution pixel embedding map to predict
a set of binary masks. Some key components in DINO are
extended for segmentation through a shared architecture
and training process. Mask DINO is simple, efficient, and
scalable, and it can benefit from joint large-scale detec-
tion and segmentation datasets. Our experiments show that
Mask DINO significantly outperforms all existing special-
ized segmentation methods, both on a ResNet-50 backbone
and a pre-trained model with SwinL backbone. Notably,
Mask DINO establishes the best results to date on instance
segmentation (54.5 AP on COCO), panoptic segmentation
(59.4 PQ on COCO), and semantic segmentation (60.8 mIoU
on ADE20K) among models under one billion parameters.
Code is available at https://github.com/IDEA-
Research/MaskDINO .
|
1. Introduction
Object detection and image segmentation are fundamen-
tal tasks in computer vision. Both tasks are concerned with
localizing objects of interest in an image but have differ-
ent levels of focus. Object detection is to localize objects
of interest and predict their bounding boxes and category
*Equal contribution.
†Work done when Feng Li and Hao Zhang were interns at IDEA.
‡Corresponding author.
labels, whereas image segmentation focuses on pixel-level
grouping of different semantics. Moreover, image segmenta-
tion encompasses various tasks including instance segmen-
tation, panoptic segmentation, and semantic segmentation
with respect to different semantics, e.g., instance or category
membership, foreground or background category.
Remarkable progress has been achieved by classical
convolution-based algorithms developed for these tasks with
specialized architectures, such as Faster RCNN [24] for ob-
ject detection, Mask RCNN [9] for instance segmentation,
and FCN [21] for semantic segmentation. Although these
methods are conceptually simple and effective, they are tai-
lored for specialized tasks and lack the generalization ability
to address other tasks. The ambition to bridge different tasks
gives rise to more advanced methods like HTC [2] for object
detection and instance segmentation and Panoptic FPN [14],
K-net [33] for instance, panoptic, and semantic segmenta-
tion. Task unification not only helps simplify algorithm
development but also brings in performance improvement in
multiple tasks.
Recently, DETR-like [1] models developed based on
Transformers [27] have achieved inspiring progress on many
detection and segmentation tasks. As an end-to-end object
detector, DETR adopts a set-prediction objective and elimi-
nates hand-crafted modules such as anchor design and non-
maximum suppression. Although DETR addresses both the
object detection and panoptic segmentation tasks, its segmen-
tation performance is still inferior to classical segmentation
models. To improve the detection and segmentation perfor-
mance of Transformer-based models, researchers have devel-
oped specialized models for object detection [15,18,32,35],
image segmentation [3, 4, 33], instance segmentation [7],
panoptic segmentation [23], and semantic segmentation [12].
Among the efforts to improve object detection,
DINO [32] takes advantage of the dynamic anchor box for-
mulation from DAB-DETR [18] and query denoising train-
ing from DN-DETR [15], and further achieves the SOTA
result on the COCO object detection leaderboard for the
first time as a DETR-like model. Similarly, for improving
image segmentation, MaskFormer [4] and Mask2Former [3]
propose to unify different image segmentation tasks using
query-based Transformer architectures to perform mask clas-
sification. Such methods have achieved remarkable perfor-
mance improvement on multiple segmentation tasks.
However, in Transformer-based models, the best-
performing detection and segmentation models are still not
unified, which prevents task and data cooperation between
detection and segmentation tasks. As evidence, in CNN-
based models, Mask R-CNN [9] and HTC [2] are still widely
acknowledged as unified models that achieve mutual cooper-
ation between detection and segmentation and attain superior
performance to specialized models. Though we believe
detection and segmentation can help each other in a unified
architecture in Transformer-based models, the results of sim-
ply using DINO for segmentation and using Mask2Former
for detection indicate that they cannot do other tasks well, as
shown in Table 1 and 2. Moreover, trivial multi-task training
can even hurt the performance of the original tasks. It nat-
urally leads to two questions: 1) why cannot detection and
segmentation tasks help each other in Transformer-based
models? and 2) is it possible to develop a unified architecture
to replace specialized ones?
To address these problems, we propose Mask DINO,
which extends DINO with a mask prediction branch in par-
allel with DINO’s box prediction branch. Inspired by other
unified models [3, 4, 28] for image segmentation, we reuse
content query embeddings from DINO to perform mask clas-
sification for all segmentation tasks on a high-resolution
pixel embedding map (1/4 of the input image resolution)
obtained from the backbone and Transformer encoder fea-
tures. The mask branch predicts binary masks by simply
dot-producting each content query embedding with the pixel
embedding map. As DINO is a detection model for region-
level regression, it is not designed for pixel-level alignment.
To better align features between detection and segmentation,
we also propose three key components to boost the segmenta-
tion performance. First, we propose a unified and enhanced
query selection. It utilizes encoder dense prior by predicting
masks from the top-ranked tokens to initialize mask queries
as anchors. In addition, we observe that pixel-level segmen-
tation is easier to learn in the early stage and propose to use
initial masks to enhance boxes, which achieves task cooper-
ation. Second, we propose a unified denoising training for
masks to accelerate segmentation training. Third, we use a
hybrid bipartite matching for more accurate and consistent
matching from ground truth to both boxes and masks.
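A minimal sketch of the mask branch described above: each content query embedding is dot-producted with the high-resolution pixel embedding map to produce one mask logit map per query. The dimensions are illustrative assumptions.

```python
import torch

B, Q, C = 2, 100, 256          # batch, number of queries, embedding dim
H, W = 200, 300                # 1/4-resolution pixel embedding map

query_embed = torch.randn(B, Q, C)       # content queries from the decoder
pixel_embed = torch.randn(B, C, H, W)    # from backbone + encoder features

# mask logits: one (H, W) map per query via a simple dot product
mask_logits = torch.einsum("bqc,bchw->bqhw", query_embed, pixel_embed)
masks = mask_logits.sigmoid() > 0.5      # binary masks after thresholding
print(mask_logits.shape)                 # torch.Size([2, 100, 200, 300])
```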
Mask DINO is conceptually simple and easy to imple-
ment under the DINO framework. To summarize, our contributions are three-fold. 1) We develop a unified Transformer-
based framework for both object detection and segmentation.
As the framework is extended from DINO, by adding a mask
prediction branch, it naturally inherits most algorithm im-
provements in DINO including anchor box-guided cross
attention, query selection, denoising training, and even a
better representation pre-trained on a large-scale detection
dataset. 2) We demonstrate that detection and segmenta-
tion can help each other through a shared architecture de-
sign and training method. Especially, detection can signifi-
cantly help segmentation tasks, even for segmenting back-
ground "stuff" categories. Under the same setting with a
ResNet-50 backbone, Mask DINO outperforms all exist-
ing models compared to DINO ( +0.8AP on COCO de-
tection) and Mask2Former ( +2.6AP, +1.1PQ, and +1.5
mIoU on COCO instance, COCO panoptic, and ADE20K
semantic segmentation). 3) We also show that, via a uni-
fied framework, segmentation can benefit from detection
pre-training on a large-scale detection dataset. After de-
tection pre-training on the Objects365 [26] dataset with a
SwinL [20] backbone, Mask DINO significantly improves
all segmentation tasks and achieves the best results on in-
stance ( 54.5 AP on COCO), panoptic ( 59.4 PQ on COCO),
and semantic ( 60.8 mIoU on ADE20K) segmentation among
models under one billion parameters.
|
Kwon_Renderable_Neural_Radiance_Map_for_Visual_Navigation_CVPR_2023
|
Abstract
We propose a novel type of map for visual navigation,
a renderable neural radiance map (RNR-Map), which is
designed to contain the overall visual information of a 3D
environment. The RNR-Map has a grid form and consists
of latent codes at each pixel. These latent codes are embed-
ded from image observations, and can be converted to the
neural radiance field which enables image rendering given
a camera pose. The recorded latent codes implicitly contain
visual information about the environment, which makes the
RNR-Map visually descriptive. This visual information in
RNR-Map can be a useful guideline for visual localization
and navigation. We develop localization and navigation
frameworks that can effectively utilize the RNR-Map. We
evaluate the proposed frameworks on camera tracking, vi-
sual localization, and image-goal navigation. Experimental
results show that the RNR-Map-based localization frame-
work can find the target location based on a single query
image with fast speed and competitive accuracy compared to
other baselines. Also, this localization framework is robust to
environmental changes, and even finds the most visually sim-
ilar places when a query image from a different environment
is given. The proposed navigation framework outperforms
the existing image-goal navigation methods in difficult sce-
narios, under odometry and actuation noises. The navigation
framework shows 65.7% success rate in curved scenarios of
the NRNS [21] dataset, which is an improvement of 18.6%
over the current state-of-the-art. Project page: https:
//rllab-snu.github.io/projects/RNR-Map/
|
1. Introduction
In this paper, we address how to explicitly embed the vi-
sual information from a 3D environment into a grid form and
how to use it for visual navigation. We present renderable
neural radiance map (RNR-Map) , a novel type of a grid map
for navigation. We point out three main properties of RNR-
Map which make RNR-Map navigation-friendly. First, it is
*This work was supported by Institute of Information & Communica-
tions Technology Planning & Evaluation (IITP) grant funded by the Korea
government (MSIT) (No. 2019-0-01190, [SW Star Lab] Robot Learning:
Efficient, Safe, and Socially-Acceptable Machine Learning). (Correspond-
ing author: Songhwai Oh)
visually descriptive. Commonly used grid-based maps such
as occupancy maps [10,11,17] and semantic maps [9,19,36],
record obstacle information or object information into grids.
In contrast, RNR-Map converts image observations to latent
codes which are then embedded in grid cells. Each latent
code in a grid cell can be converted to a neural radiance
field, which can render the corresponding region. We can
utilize the implicit visual information of these latent codes
to understand and reason about the observed environment.
For example, we can locate places based on an image or
determine which region is the most related to a given im-
age. RNR-Map enables image-based localization only with
a simple forward pass in a neural network, by directly uti-
lizing the latent codes without rendering images. We build
a navigation framework with RNR-Map, to navigate to the
most plausible place given a query image. Through exten-
sive experiments, we validate that the latent codes can serve
as important visual clues for both image-based localization
and image-goal navigation. More importantly, a user has an
option to utilize the renderable property of RNR-Map for
a finer level of localization, such as camera tracking.
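The following is a minimal, hypothetical sketch of the RNR-Map idea: a 2D grid of latent codes is written from image observations, and a decoder conditioned on the local latent code predicts color and density for 3D points that fall in that cell, which can then be volume-rendered. The grid size, decoder, and registration step are simplifying assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class LatentGridMap(nn.Module):
    def __init__(self, grid_size=64, latent_dim=32):
        super().__init__()
        self.grid = torch.zeros(grid_size, grid_size, latent_dim)  # RNR-Map grid
        self.grid_size = grid_size
        # decoder: (latent code, 3D point, view dir) -> (RGB, density)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim + 3 + 3, 128), nn.ReLU(),
            nn.Linear(128, 4),
        )

    def write(self, xy_cells, codes):
        # register latent codes produced by an image encoder into grid cells
        self.grid[xy_cells[:, 0], xy_cells[:, 1]] = codes

    def query(self, points, view_dirs):
        # points: (N, 3) in map coordinates; look up the cell each point falls in
        cells = points[:, :2].clamp(0, self.grid_size - 1).long()
        latents = self.grid[cells[:, 0], cells[:, 1]]               # (N, latent_dim)
        out = self.decoder(torch.cat([latents, points, view_dirs], dim=-1))
        rgb, density = out[:, :3].sigmoid(), out[:, 3:].relu()
        return rgb, density                                         # feed to volume rendering

rnr = LatentGridMap()
rnr.write(torch.randint(0, 64, (10, 2)), torch.randn(10, 32))
rgb, sigma = rnr.query(torch.rand(100, 3) * 63, torch.randn(100, 3))
print(rgb.shape, sigma.shape)           # torch.Size([100, 3]) torch.Size([100, 1])
```

Because the latent codes themselves summarize the observed images, localization can compare codes directly, skipping the rendering step.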
RNR-Map is generalizable . There have been a number
of studies that leverage neural radiance fields (NeRF) for
various applications other than novel view synthesis. The
robotic applications of NeRF are also now beginning to
emerge [1, 15, 28, 35, 42]. However, many of the approaches
require pretrained neural radiance fields about a specific
scene and are not generalizable to various scenes. This can
be a serious problem when it comes to visual navigation
tasks, which typically assume that an agent performs the
given task in an unseen environment [16]. In contrast, RNR-
Map is applicable in arbitrary scenes without additional opti-
mization. Even with the unseen environment, the RNR-Map
can still embed the useful information from images to the
map and render images. The neural radiance fields of RNR-
Map are conditioned on the latent codes. A pair of encoder
and decoder is trained to make these latent codes from im-
ages of arbitrary scenes and reconstruct images using neural
radiance fields. These pretrained encoder and decoder enable
the generalization to unseen environments.
Third, RNR-Map is real-time capable . The majority of
the present NeRF-based navigation methods require sig-
nificant inference time because of computation-
heavy image rendering and rendering-based optimization
steps. The RNR-Map is designed to operate fast enough not
to hinder the navigation system. By directly utilizing the
latent codes, we can eliminate the rendering step in mapping
and localization. The mapping and image-based localization
frameworks operate at 91.9Hz and 56.8Hz, respectively. The
only function which needs rendering-based optimization is
camera tracking, which can localize under odometry noises,
and it operates at 5Hz.
To the best of our knowledge, the RNR-Map is the first
method having all three of the aforementioned characteris-
tics as a navigation map. The RNR-Map and its localization
and navigation frameworks are evaluated in various visual
navigation tasks, including camera tracking, image-based lo-
calization, and image-goal navigation. Experimental results
show that the proposed RNR-Map serves as an informative
map for visual navigation. Our localization framework ex-
hibits competitive localization accuracy and inference speed
when compared to existing approaches. On the image-goal
navigation task, the navigation framework displays 65.7%
success rate in curved scenarios of the NRNS [21] dataset,
where the current state-of-the-art method [37] shows a suc-
cess rate of 55.4%.
As RNR-Map finds a place based on the visual informa-
tion of the map and the query image, we also consider a
variant version of image-based localization. In real-world
scenarios, there can be partial changes in the target place
(changes in furniture placement, lighting conditions, ...).
Also, the user might only have images from similar but
different environments. We test the proposed localization
framework in both cases. We find that the RNR-Map is ro-
bust to environmental changes and is able to find the most
visually similar places even when a novel query image from
a different environment is provided.
The contributions of this paper can be summarized as
follows:
•We present RNR-Map, a novel type of renderable grid
map for navigation, utilizing neural radiance fields for
embedding the visual appearance of the environment.
•We demonstrate efficient and effective methods for uti-
lizing the visual information in RNR-Map for searching
an image goal by developing RNR-Map-based localiza-
tion and navigation framework.
•Extensive experiments show that the proposed method
shows the state-of-the-art performance in both localiza-
tion and image-goal navigation.
|
Kumar_Few-Shot_Referring_Relationships_in_Videos_CVPR_2023
|
Abstract
Interpreting visual relationships is a core aspect of com-
prehensive video understanding. Given a query visual re-
lationship as <subject, predicate, object >and a test video,
our objective is to localize the subject and object that are
connected via the predicate. Given modern visio-lingual
understanding capabilities, solving this problem is achiev-
able, provided that there are large-scale annotated training
examples available. However, annotating for every combi-
nation of subject, object, and predicate is cumbersome, ex-
pensive, and possibly infeasible. Therefore, there is a need
for models that can learn to spatially and temporally lo-
calize subjects and objects that are connected via an unseen predicate using only a few support set videos shar-
ing the common predicate. We address this challenging
problem, referred to as few-shot referring relationships in
videos for the first time. To this end, we pose the problem
as a minimization of an objective function defined over a
T-partite random field. Here, the vertices of the random
field correspond to candidate bounding boxes for the sub-
ject and object, and T represents the number of frames in
the test video. This objective function is composed of frame-
level and visual relationship similarity potentials. To learn
these potentials, we use a relation network that takes query-
conditioned translational relationship embedding as inputs
and is meta-trained using support set videos in an episodic
manner. Further, the objective function is minimized using
a belief propagation-based message passing on the random
field to obtain the spatiotemporal localization of subject
and object trajectories. We perform extensive experiments
using two public benchmarks, namely ImageNet-VidVRD
and VidOR, and compare the proposed approach with com-
petitive baselines to assess its efficacy.
|
1. Introduction
Consider the following problem: given a video, a vi-
sual relationship query represented as a <subject, predicate,
object> tuple, and a support set of a few videos contain-
ing the same predicate but not necessarily the same sub-
jects and objects, our objective is to spatially and tempo-
rally localize both subjects and objects that are related via
the predicate within the video. We refer to this problem as
Few-shot Referring Relationship and illustrate it in Figure 1.
Solving this problem has the potential to benefit cross-task
video understanding [41] and video retrieval [5, 7], among
other applications. Identifying its utility, referring rela-
tionship task for images has been first introduced by [15].
However, referring relationships in videos poses additional
video-specific challenges, such as understanding dynamic
visual relationships. Some of these challenges have been
addressed in recent research by Xiao et al. [35], but with
a reliance on strong supervision. Referring relationships in
videos within a few-shot setup is an under-explored area.
We aim to fill this research gap via our work.
Visual relationships inherently have long-tail distribu-
tions in any video collection. For example, the ImageNet-
VidVRD [27] dataset includes approximately 18.9% of predicates
with more than 100 instances but 20.5% of predicates with fewer
than 10 instances. This phenomenon is also shown in Fig-
ure 1, where most predicates belong to the tail side of the
distribution. The methods that work best for frequent visual
relationships do not necessarily generalize well to unseen
visual relationships. Moreover, in a real-world scenario an-
notating visual relationships for each combination of sub-
ject, object, and predicate are cumbersome, expensive, and
possibly infeasible. Therefore, there is a need to study vi-
sual relationship tasks in a few-shot setup. For instance:
only with a few examples of the fly above predicate, such as
videos containing <bird, fly above, person>, <helicopter,
fly above, train> as shown in Figure 1 (a), a model should
be able to generalize to the unseen visual relationship, such
as <plane, fly above, person>. We propose a solution for
Few-shot Referring Relationship in videos in this work.
We pose the problem of a few-shot referring relation-
ship in the video as a minimization of an objective func-
tion defined over a T-partite random field where T is the
number of frames in the test video. Furthermore, the ver-
tices of the random field are treated as random variables
and represent candidate bounding boxes for the subject and objects. The objective function consists of frame-level po-
tentials and visual relationship similarity potentials, both of
which are learned using a relation network that takes query-
conditioned translational relationship embeddings as inputs.
We meta-train the relation network using support set videos
in an episodic manner. Further, the objective function is
minimized using a belief propagation-based message pass-
ing on the random field to obtain subject and object tra-
jectories. We perform extensive experiments on two pub-
lic benchmarks, namely ImageNet-VidVRD [27] and Vi-
dOR [31], and report the accuracy of localizing subject,
object, and relation, denoted by Asub, Aobj, and Ar, re-
spectively, along with other popular measures used in the
literature. Our proposed approach clearly outperforms the
related baselines.
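As a concrete illustration of the inference step described above, the sketch below runs max-product message passing (Viterbi-style) over a chain-structured T-partite field, combining per-frame unary potentials with pairwise potentials between candidates in adjacent frames to select one candidate box per frame. The random potentials are stand-ins; in the paper they come from the meta-trained relation network, and the field is defined jointly over subject and object boxes.

```python
import numpy as np

T, K = 8, 5                                  # frames, candidate boxes per frame
unary = np.random.rand(T, K)                 # frame-level potentials (higher = better)
pairwise = np.random.rand(T - 1, K, K)       # similarity between consecutive candidates

# forward messages
score = unary[0].copy()
backptr = np.zeros((T - 1, K), dtype=int)
for t in range(1, T):
    cand = score[:, None] + pairwise[t - 1]  # (K_prev, K_cur)
    backptr[t - 1] = cand.argmax(axis=0)
    score = cand.max(axis=0) + unary[t]

# backtrack the best trajectory of candidate boxes
traj = [int(score.argmax())]
for t in range(T - 2, -1, -1):
    traj.append(int(backptr[t][traj[-1]]))
trajectory = traj[::-1]                      # selected candidate index per frame
print(trajectory)
```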
The contributions of this work are threefold. (i) We
propose a novel problem setup for referring relationship in
videos, where the model must learn to localize the subject
and object corresponding to a query visual relationship that
was unseen during training using only a few support videos.
(ii) We propose a new formulation to solve this task based
on the minimization of an objective function on a T-partite
random field where T is the number of frames in the test
video, and the vertices of the random field representing po-
tential bounding boxes for subject and objects correspond
to the random variables. (Section 3.1). (iii) Additionally, to
enrich query-conditioned relational embeddings, we present
two aggregation techniques, namely global semantic and
local localization aggregations. The use of these aggrega-
tion techniques results in enhanced relationship representa-
tions, which helps to obtain better trajectories for objects
and subjects related via the query visual relationship. This
is evidenced by extensive experiments and ablations. (Sec-
tions 3.2 and 4.5).
|
Liu_Progressive_Semantic-Visual_Mutual_Adaption_for_Generalized_Zero-Shot_Learning_CVPR_2023
|
Abstract
Generalized Zero-Shot Learning (GZSL) identifies un-
seen categories by knowledge transferred from the seen
domain, relying on the intrinsic interactions between vi-
sual and semantic information. Prior works mainly lo-
calize regions corresponding to the sharing attributes.
When various visual appearances correspond to the same
attribute, the sharing attributes inevitably introduce se-
mantic ambiguity, hampering the exploration of accurate
semantic-visual interactions. In this paper, we deploy
the dual semantic-visual transformer module (DSVTM)
to progressively model the correspondences between at-
tribute prototypes and visual features, constituting a pro-
gressive semantic-visual mutual adaption (PSVMA) net-
work for semantic disambiguation and knowledge trans-
ferability improvement. Specifically, DSVTM devises an
instance-motivated semantic encoder that learns instance-
centric prototypes to adapt to different images, enabling
the recast of the unmatched semantic-visual pair into the
matched one. Then, a semantic-motivated instance decoder
strengthens accurate cross-domain interactions between the
matched pair for semantic-related instance adaption, en-
couraging the generation of unambiguous visual represen-
tations. Moreover, to mitigate the bias towards seen classes
in GZSL, a debiasing loss is proposed to pursue response
consistency between seen and unseen predictions. The
PSVMA consistently yields superior performances against
other state-of-the-art methods. Code will be available at:
https://github.com/ManLiuCoder/PSVMA.
|
1. Introduction
Generalized Zero-Shot Learning (GZSL) [35] aims to
recognize images belonging to both seen and unseen cat-
egories, solely relying on the seen domain data. Freed
*Corresponding author
Figure 1. The embedding-based models for GZSL. (a) The early
embedding-based method. (b) Part-based methods via attention
mechanisms. (c) Semantic-guided methods. (d) Our PSVMA. A,
S,Fdenote the category attribute prototypes, sharing attributes,
and visual features, respectively. The PSVMA progressively per-
forms semantic-visual mutual adaption for semantic disambigua-
tion and knowledge transferability improvement.
from the requirement of enormous manually-labeled data,
GZSL has extensively attracted increasing attention as a
challenging recognition task that mimics human cognitive
abilities [25]. As unseen images are not available during
training, knowledge transfer from the seen to unseen do-
mains is achieved via auxiliary semantic information ( i.e.,
category attributes [15, 25], text descriptions [26, 38], and
word embedding [31, 32, 40]).
Early embedding-based methods [2,3,45,51] embed cat-
egory attributes and visual images and learn to align global
visual representations with corresponding category proto-
types, as shown in Fig. 1 (a). Nevertheless, the global
information is insufficient to mine fine-grained discrimi-
native features which are beneficial to capture the sub-
tle discrepancies between seen and unseen classes. To
solve this issue, part-based learning strategies have been
leveraged to explore distinct local features. Some works
[27, 29, 33, 47, 48, 52] apply attention mechanisms to high-
light distinctive areas, as shown in Fig. 1 (b). These meth-
ods fail to develop the deep correspondence between vi-
sual and attribute features, which results in biased recog-
nition of seen classes. More recently, semantic-guided ap-
proaches (see Fig. 1 (c)) are proposed to employ the shar-
ing attribute and localize specific attribute-related regions
[8, 22, 30, 49, 50]. They establish interactions between the
sharing attributes and visual features during localization,
further narrowing the cross-domain gap. Actually, various
visual appearances correspond to the same sharing attribute
descriptor. For example, for the attribute descriptor “tail”,
the visual appearances of a dolphin’s and a rat’s tail differ.
The above methods are thus suboptimal for building
matched visual-semantic pairs and are inclined to generate am-
biguous semantic representations. Further, this semantic
ambiguity can hamper cross-domain interactions based on
unmatched visual-semantic pairs, which is detrimental to
the knowledge transferring from seen to unseen classes.
To tackle this problem, we propose a progressive
semantic-visual mutual adaption (PSVMA) network, as
shown in Fig. 1 (d), to progressively adapt the sharing at-
tributes and image features. Specifically, inspired by the
powerful ability of the vision transformers (ViT) [14] to
capture global dependencies, we apply ViT for visual em-
bedding and extract image patch features for the interac-
tion with semantic attributes. With the embedded visual
and attribute features, we devise the dual semantic-visual
transformer module (DSVTM) in PSVMA, which consists
of an instance-motivated semantic encoder (IMSE) and a
semantic-motivated instance decoder (SMID).
Concretely, in IMSE, we first perform instance-aware
semantic attention to adapt the sharing attributes to var-
ious visual features. Based on the interrelationship be-
tween attribute groups, we further introduce attribute com-
munication and activation to promote the compactness be-
tween attributes. In this way, IMSE recurrently converts
the sharing attributes into instance-centric semantic fea-
tures and recasts the unmatched semantic-visual pair into
the matched one, alleviating the problem of semantic ambi-
guity. Subsequently, SMID explores the cross-domain cor-
respondences between each visual patch and all matched
attributes for semantic-related instance adaption, providing
accurate semantic-visual interactions. Combined with the
refinement of the patch mixing and activation in SMID, the
visual representation is eventually adapted to be unambigu-
ous and discriminative. In addition, we design a novel de-
biasing loss for PSVMA to assist the process of knowledge
transfer by pursuing the distribution consistency of inferred
scores, mitigating the common bias towards seen domains.
Consequently, PSVMA can effectively achieve semantic
disambiguation and improve knowledge transferability by
progressive semantic-visual mutual adaption, gaining more
accurate inferences for both seen and unseen categories.
Our key contributions can be summarized as follows: (1)
We propose a progressive semantic-visual mutual adaption
(PSVMA) network that deploys the dual semantic-visual
transformer module (DSVTM) to alleviate semantic am-
biguity and strengthen feature transferability through mu-
tual adaption. (2) The sharing attributes are converted into
instance-centric attributes to adapt to different visual im-
ages, enabling the recast of the unmatched semantic-visual
pair into the matched one. Furthermore, accurate cross-
domain correspondence is constructed to acquire transfer-
able and unambiguous visual features. (3) Extensive ex-
periments over common benchmarks demonstrate the effec-
tiveness of our PSVMA with superior performance. Partic-
ularly, our method achieves 75.4% for the harmonic mean
on the popular benchmark AwA2, outperforming previous
competitive solutions by more than 2.3%.
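To make the instance-motivated semantic attention in IMSE concrete, the following is a minimal, hypothetical sketch: the sharing attribute prototypes act as queries attending over ViT patch features, yielding instance-centric attribute features per image, which can then interact with the patches again in the decoder stage (SMID). Single-head attention and the dimensions used here are simplifying assumptions.

```python
import torch
import torch.nn as nn

B, P, A, D = 2, 196, 85, 256               # batch, patches, attributes, feature dim
patch_feats = torch.randn(B, P, D)         # visual patch features (e.g. from ViT)
sharing_attrs = torch.randn(A, D)          # dataset-level sharing attribute prototypes

attn = nn.MultiheadAttention(embed_dim=D, num_heads=1, batch_first=True)
queries = sharing_attrs.unsqueeze(0).expand(B, A, D)   # same prototypes per image
instance_attrs, attn_weights = attn(queries, patch_feats, patch_feats)

# instance_attrs: (B, A, D) instance-centric attributes, one set per image
print(instance_attrs.shape, attn_weights.shape)        # (2, 85, 256) (2, 85, 196)
```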
|
Liang_Unknown_Sniffer_for_Object_Detection_Dont_Turn_a_Blind_Eye_CVPR_2023
|
Abstract
The recently proposed open-world object and open-set
detection have achieved a breakthrough in finding never-
seen-before objects and distinguishing them from known
ones. However, their studies on knowledge transfer from
known classes to unknown ones are not deep enough, result-
ing in the scanty capability for detecting unknowns hidden
in the background. In this paper, we propose the unknown
sniffer (UnSniffer) to find both unknown and known objects.
Firstly, the generalized object confidence (GOC) score is
introduced, which only uses known samples for supervision
and avoids improper suppression of unknowns in the back-
ground. Significantly, such confidence score learned from
known objects can be generalized to unknown ones. Addi-
tionally, we propose a negative energy suppression loss to
further suppress the non-object samples in the background.
Next, the best box of each unknown is hard to obtain during
inference due to lacking their semantic information in train-
ing. To solve this issue, we introduce a graph-based deter-
mination scheme to replace hand-designed non-maximum
suppression (NMS) post-processing. Finally, we present
the Unknown Object Detection Benchmark, the first pub-
licly benchmark that encompasses precision evaluation for
unknown detection to our knowledge. Experiments show
that our method is far better than the existing state-of-the-
art methods. Code is available at: https://github.
com/Went-Liang/UnSniffer .
|
1. Introduction
Detecting objects with a limited number of classes in the
closed-world setting [2,3,14,20,21,23,31–33,46] has been
the norm for years. Recently, the popularity of autonomous
†Equal Contribution
*Corresponding Author
This work was supported by the national key R & D program inter-
governmental international science and technology innovation cooperation
project 2021YFE0101600.
Figure 1. (a)-(c) the predicted unknown (blue), known (yellow),
and missed (red) objects of VOS [9], ORE [18], and our model.
(d) t-SNE visualization of various classes’ hidden vectors. (e)
score for objects and non-object (generalized object confidence).
(f) score for unknown, known, and non-object (negative energy).
driving [4, 7, 17, 25, 29, 30, 38, 39, 43–45] has raised the bar
for object detection. That is, the detector should detect both
known and unknown objects. ‘ Known Objects ’ are those
that belong to pre-defined categories, while ‘ Unknown Ob-
jects ’ are those that the detector has never seen during train-
ing. Detecting unknown objects is crucial in coping with
more challenging environments, such as autonomous driv-
ing scenes with potential hazards.
Since unknown objects do not have labels in the train-
ing set, how to learn knowledge that can be generalized to
unknown classes from finite pre-defined categories is the
key issue in detecting unknown objects. In recent years,
a series of groundbreaking works have been impressive on
open-set detection (OSD) [8, 9, 11, 28] and open-world ob-
ject detection (OWOD) [12,18,37, 40]. Several OSD meth-
ods have used uncertainty measures to distinguish unknown
objects from known ones. However, they primarily focus
on improving the discriminatory power of uncertainty and
tend to suppress non-objects along with many potential un-
knowns in the training phase. As a result, these meth-
ods miss many unknown objects. Fig. 1 (a) shows that
VOS [9] misses many unknown objects, such as bags, stalls
and surfboards. Furthermore, OWOD requires generating
high-quality boxes for both known and unknown objects.
ORE [18] and OW-DETR [12] collect the pseudo-unknown
samples by an auto-labelling step for supervision and per-
form knowledge transfer from the known to the unknown
by contrastive learning or foreground objectness. But the
pseudo-unknown samples are unrepresentative of the un-
known objects, thus limiting the model’s ability to describe
unknowns. Fig. 1 (b) shows that ORE [18] mis-detects
many unknown objects, even though some are apparent.
In philosophy, there is a concept called ‘ Analogy ’ [34],
which describes unfamiliar things with familiar ones. We
argue that despite being ever-changing in appearance, the
unknown objects are often visually similar to the objects
of pre-defined classes , as observed in Fig. 1 (d). The t-
SNE visualization shows that the unknown objects tend to
be among several pre-defined classes, while the non-objects
are far away from them. This inspires us to express a unified
concept of ‘object’ by the proposed generalized object con-
fidence (GOC) score learned from the known objects only.
To this end, we first discard the background bounding boxes
and only collect the object-intersected boxes for training to
prevent potential unknown objects from being classified as
backgrounds. Then, a combined loss function is designed
to force the detector to assign relatively higher scores to
boxes tightly enclosing objects. Unlike ‘objectness’, non-
object boxes are not used as the negative samples for su-
pervision. Fig. 1 (e) shows that the GOC score distinctly
separates non-objects and ‘objects’. In addition, we design
a negative energy suppression loss on top of VOS’s energy
calculation [9] to further widen the gap between the non-
object and the ‘object’. Fig. 1 (f) shows three distinct peaks
for the knowns, unknowns and non-objects. Next, due to the
absence of the unknown’s semantic information in training,
the detector can hardly determine the best bounding box with a
constant threshold when the number of objects cannot be
predicted ahead of time. In our model, the best box determi-
nation is modelled as a graph partitioning problem, which
adaptively clusters high-score proposals into several groups
and selects one from each group as the best box.
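The following is a minimal sketch of the graph-based box determination idea: high-score proposals become nodes, edges connect heavily overlapping boxes, the graph is partitioned into groups, and the highest-scoring box in each group is kept. Connected components over an IoU threshold are used here as a simple stand-in for the paper's partitioning scheme.

```python
import numpy as np

def iou(a, b):
    # boxes as [x1, y1, x2, y2]
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def graph_box_determination(boxes, scores, score_thr=0.5, iou_thr=0.5):
    keep_idx = [i for i, s in enumerate(scores) if s >= score_thr]
    # adjacency over high-score proposals
    adj = {i: set() for i in keep_idx}
    for i in keep_idx:
        for j in keep_idx:
            if i < j and iou(boxes[i], boxes[j]) >= iou_thr:
                adj[i].add(j)
                adj[j].add(i)
    # cluster proposals into groups via connected components
    groups, seen = [], set()
    for i in keep_idx:
        if i in seen:
            continue
        stack, comp = [i], []
        while stack:
            u = stack.pop()
            if u in seen:
                continue
            seen.add(u)
            comp.append(u)
            stack.extend(adj[u] - seen)
        groups.append(comp)
    # keep the highest-scoring box of each group as the final detection
    return [max(g, key=lambda k: scores[k]) for g in groups]

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]], dtype=float)
scores = np.array([0.9, 0.8, 0.7])
print(graph_box_determination(boxes, scores))   # e.g. [0, 2]
```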
As far as we know, the existing methods are evaluated on
the COCO [22] and Pascal VOC benchmarks [10] that do
not thoroughly label unknown objects. Therefore, the accu-
racy of unknown object detection cannot be evaluated. Mo-
tivated by this practical need, we propose the Unknown Ob-
ject Detection Benchmark (UOD-Benchmark), which takes
the VOC’s training set as the training data and contains two test sets. (1) COCO-OOD containing objects with the un-
known class only; (2) COCO-Mix with both unknown and
known objects. They are collected from the original COCO
dataset [22] and annotated according to the COCO’s in-
stance labeling standard. In addition, the Pascal VOC test-
ing set is employed for evaluating known object detection.
Our key contributions can be summarized as follows:
• To better separate non-object and ‘object’, we propose
the GOC score learned from known objects to express
unknown objects and design the negative energy sup-
pression to further limit non-object.
• The graph-based box determination is designed to
adaptively select the best bounding box for each object
during inference for higher unknown detection preci-
sion.
• We propose the UOD-Benchmark containing annota-
tion of both known and unknown objects, enabling us
to evaluate the precision of unknown detection. We
comprehensively evaluate our method on this bench-
mark which facilitates future use of unknown detection
in real-world settings.
|
Liu_SynthVSR_Scaling_Up_Visual_Speech_Recognition_With_Synthetic_Supervision_CVPR_2023
|
Abstract
Recently reported state-of-the-art results in visual
speech recognition (VSR) often rely on increasingly large
amounts of video data, while the publicly available tran-
scribed video datasets are limited in size. In this paper, for
the first time, we study the potential of leveraging synthetic
visual data for VSR. Our method, termed SynthVSR, sub-
stantially improves the performance of VSR systems with
synthetic lip movements. The key idea behind SynthVSR is
to leverage a speech-driven lip animation model that gen-
erates lip movements conditioned on the input speech. The
speech-driven lip animation model is trained on an unla-
beled audio-visual dataset and could be further optimized
towards a pre-trained VSR model when labeled videos are
available. As plenty of transcribed acoustic data and face
images are available, we are able to generate large-scale
synthetic data using the proposed lip animation model for
semi-supervised VSR training. We evaluate the perfor-
mance of our approach on the largest public VSR bench-
mark - Lip Reading Sentences 3 (LRS3). SynthVSR achieves
a WER of 43.3% with only 30 hours of real labeled data,
∗Work done during an internship at Meta AI.
outperforming off-the-shelf approaches using thousands of
hours of video. The WER is further reduced to 27.9% when
using all 438 hours of labeled data from LRS3, which is
on par with the state-of-the-art self-supervised AV-HuBERT
method. Furthermore, when combined with large-scale
pseudo-labeled audio-visual data SynthVSR yields a new
state-of-the-art VSR WER of 16.9% using publicly available
data only, surpassing the recent state-of-the-art approaches
trained with 29 times more non-public machine-transcribed
video data (90,000 hours). Finally, we perform extensive
ablation studies to understand the effect of each component
in our proposed method.
|
1. Introduction
Visual speech recognition (VSR), also known as lip read-
ing, is the task of recognizing speech content based on vi-
sual lip movements. VSR has a wide range of applica-
tions in real-world scenarios such as helping the hearing-
impaired perceive human speech and improving automatic
speech recognition (ASR) in noisy environments.
VSR is a challenging task, as it requires capturing speech
from high-dimensional spatio-temporal videos, while mul-
tiple words are visually ambiguous (e.g., “world” and
“word”) in the visual streams. Recently, with the release of
large-scale transcribed audio-visual datasets such as LRS2
[1] and LRS3 [2], deep neural networks have become the
mainstream approach for VSR. However, even the largest
public dataset for English VSR, LRS3, does not exceed 500
hours of transcribed video data. The lack of large-scale
transcribed audio-visual datasets potentially results in VSR
models which could only work in a laboratory environment
i.e. limited word vocabulary and lip sources diversity [28].
A common solution to this issue is to collect and annotate
large-scale audio-visual datasets. For example, [41,42] col-
lected 90,000 hours of YouTube videos with user-uploaded
transcriptions to achieve state-of-the-art performance on
standard benchmarks. However, such a procedure is expen-
sive and time-consuming, especially for most of the world’s
7,000 languages [43]. If annotations are missing, the ASR
can be used to generate the transcriptions automatically and
this has been shown to be an effective approach to signifi-
cantly improve VSR performance [28]. The other promis-
ing direction is to learn audio-visual speech representations
from large amounts of parallel unlabeled audio-visual data
in a self-supervised approach, and then fine-tune them on
the limited labeled video dataset [43]. Nevertheless, pub-
licly available video datasets are also limited and their us-
age may raise license-related1concerns, barring their use in
commercial applications.
Human perception of speech is inherently multimodal,
involving both audition and vision [43]. ASR, which is a
complementary task to VSR, has achieved impressive per-
formance in recent years, with tens of thousands of hours
of annotated speech datasets [4, 20, 33] available for large-
scale training. It is intuitive to ask: Can we improve VSR
with large amounts of transcribed acoustic-only ASR train-
ing data? The key to this question is to take advantage
of recent advances in speech-driven visual generative mod-
els [49,50]. By leveraging visual generative models, we can
produce parallel synthetic videos for large-scale labeled au-
dio datasets. Synthetic videos provide advantages such as
having control over the target text and lip image as well as
the duration of a generated utterance. To the best of our
knowledge, the potential of leveraging synthetic visual data
for improving VSR has never been studied in the literature.
In this work, we present SynthVSR, a novel semi-
supervised framework for VSR. In particular, we first pro-
pose a speech-driven lip animation model that can generate
synthetic lip movements video conditioned on the speech
content. Next, the proposed lip animation model is used
to generate synthetic video clips from transcribed speech
datasets (e.g., Librispeech [33]) and human face datasets
(e.g., CelebA [23]). Then, the synthetic videos together
with the corresponding transcriptions are used in combi-
1Such as LRW [11] and LRS2 [1] datasets which are only permitted for
the purpose of academic research.
nation with the real video-text pairs (e.g., LRS3 [2]) for
large-scale semi-supervised VSR training. The pipeline of
SynthVSR is illustrated in Fig. 1. Unlike existing studies
in exploiting unlabeled video data for VSR using meth-
ods such as pseudo-labeling [28] and self-supervised learn-
ing [43], we use the unlabeled audio-visual data to train a
cross-modal generative model in order to bridge ASR train-
ing data and VSR. Furthermore, we propose to optimize the
lip animation model towards a pre-trained VSR model when
labeled videos are available. We empirically demonstrate
that the semantically high-level, spatio-temporal supervi-
sion signal from the pre-trained VSR model offers the lip
animation model more accurate lip movements.
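A minimal, hypothetical sketch of optimizing the lip animation generator against a frozen pre-trained VSR model: synthetic lip frames rendered from speech features are passed through the VSR model, and a recognition loss on the known transcript is back-propagated into the generator. Both networks below are trivial stand-ins for illustration only; the actual models are far larger.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

T_v, V = 25, 30                        # video frames, vocabulary size
generator = nn.Sequential(nn.Linear(80, 256), nn.ReLU(), nn.Linear(256, 88 * 88))
vsr = nn.Sequential(nn.Linear(88 * 88, 256), nn.ReLU(), nn.Linear(256, V + 1))
for p in vsr.parameters():             # pre-trained VSR model stays frozen
    p.requires_grad_(False)

audio_feats = torch.randn(1, T_v, 80)            # speech features per video frame
transcript = torch.randint(1, V + 1, (1, 12))    # known transcription (blank = 0)

frames = generator(audio_feats)                  # synthetic lip crops (flattened)
logits = vsr(frames).log_softmax(-1)             # (1, T_v, V+1)
loss = F.ctc_loss(logits.transpose(0, 1), transcript,
                  input_lengths=torch.full((1,), T_v),
                  target_lengths=torch.full((1,), 12))
loss.backward()                                  # gradients reach the generator only
```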
SynthVSR achieves remarkable performance gains with
labeled video data of different scales. We evaluate the
performance of SynthVSR on the largest public VSR
benchmark LRS3 with a Conformer-Transformer encoder-
decoder VSR model [28]. In the low-resource setup us-
ing only 30 hours of labeled video data from LRS3 [2],
our approach achieves a VSR WER of 43.3%, substantially
outperforming the former VSR methods using hundreds or
thousands of hours of video data for supervised training
[1,27,37,44,52] and self-supervised learning [3,26,43]. No-
tably, we demonstrate the first successful attempt that trains
a VSR model with considerable WER performance using
only 30 hours of real video data. Using the complete 438
hours from LRS3 further improves WER to 27.9% which is
on par with the state-of-the-art self-supervised method AV-
HuBERT-LARGE [43] that uses external 1,759 hours of un-
labeled audio-visual data, but with fewer model parameters.
Furthermore, following a recent high-resource setup [25]
which uses additional 2,630 hours of ASR pseudo-labeled
publicly available audio-visual data, our proposed method
yields a new state-of-the-art VSR WER of 16.9%, surpass-
ing the former state-of-the-art approaches [41, 42] trained
on 90,000 hours of non-public machine-transcribed data.
Finally, we present extensive ablation studies to analyze
where the improvement of SynthVSR comes from (e.g., the
diversity of lip sources, the scale of ASR data). We also
show considerable VSR improvement using synthetic video
data derived from Text-To-Speech (TTS)-generated speech,
indicating the great potential of our method for VSR.
|
Lin_Multimodality_Helps_Unimodality_Cross-Modal_Few-Shot_Learning_With_Multimodal_Models_CVPR_2023
|
Abstract
The ability to quickly learn a new task with minimal in-
struction – known as few-shot learning – is a central as-
pect of intelligent agents. Classical few-shot benchmarks
make use of few-shot samples from a single modality, but
such samples may not be sufficient to characterize an entire
concept class. In contrast, humans use cross-modal infor-
mation to learn new concepts efficiently. In this work, we
demonstrate that one can indeed build a better visual dog
classifier by reading about dogs and listening to them bark.
To do so, we exploit the fact that recent multimodal founda-
tion models such as CLIP are inherently cross-modal, map-
ping different modalities to the same representation space.
Specifically, we propose a simple cross-modal adaptation
approach that learns from few-shot examples spanning dif-
ferent modalities. By repurposing class names as additional
one-shot training samples, we achieve SOTA results with an
embarrassingly simple linear classifier for vision-language
adaptation. Furthermore, we show that our approach can
benefit existing methods such as prefix tuning, adapters, and
classifier ensembling. Finally, to explore other modalities
beyond vision and language, we construct the first (to our
knowledge) audiovisual few-shot benchmark and use cross-
modal training to improve the performance of both image
and audio classification. Project site at link.
|
1. Introduction
Learning with minimal instruction is a hallmark of hu-
man intelligence [ 86,91,98], and is often studied under the
guise of few-shot learning. In the context of few-shot visual
classification [ 18,20,29,46,79,82], a classifier is first pre-
trained on a set of base classes to learn a good feature repre-
sentation and then adapted or finetuned on a small amount
of novel class data. However, such few-shot setups often
face an inherent ambiguity – if the training image contains a
golden retriever wearing a hat, how does the learner know if
*Equal contribution.
Figure 1. Human perception is internally cross-modal. When
we perceive from one modality (such as vision), the same neu-
rons will be triggered in our cerebral cortex as if we are perceiv-
ing the object from other modalities (such as language and au-
dio) [ 24,67,70]. This phenomenon grants us a strong ability to
learn from a few examples with cross-modal information [ 52,67].
In this work, we propose to leverage cross-modality to adapt mul-
timodal models (such as CLIP [ 81] and AudioCLIP [ 27]), that en-
code different modalities to the same representation space.
the task is to find dogs, golden retrievers, or even
hats? On the other hand, humans have little trouble under-
standing and even generalizing from as few as one example.
How so?
We argue that humans make use of multimodal sig-
nals and representations ( Figure 1 ) when learning concepts.
For example, verbal language has been shown to help tod-
dlers better recognize visual objects given just a few ex-
amples [ 42,90]. Indeed, there exists ample evidence from
neuroscience suggesting that cognitive representations are
inherently multimodal. For instance, visual images of a
person evoke the same neurons as the textual strings of the
person’s name [ 80] and even audio clips of that person talk-
ing [70]. Even for infants as young as 1-5 months old, there
is a strong correspondence between auditory-visual [ 52] as
well as visual-tactile signals [ 67]. Such cross-modal or
inter-modal representations are fundamental to the human
perceptual-cognitive system, allowing us to understand new
concepts even with few examples [ 24].
Cross-modal adaptation (our approach). In this paper,
we demonstrate that cross-modal understanding of different
modalities (such as image-text or image-audio) can improve
the performance of individual modalities. That is, reading
about dogs and listening to them bark can help build a better
visual classifier for them! To do so, we present a remark-
ably simple strategy for cross-modal few-shot adaptation:
we treat examples from different modalities as additional
few-shot examples . For example, given the “1-shot” task
of learning a dog classifier, we treat both the textual dog
label and the single visual image as training examples for
learning a (visual) dog classifier. Learning is straightfor-
ward when using frozen textual and visual encoders, such
as CLIP [ 81], that map different modalities to the same rep-
resentational space. In essence, we have converted the “n-
shot” problem to a “(n+1)-shot” problem ( Figure 2 )! We
demonstrate that this basic strategy produces SOTA results
across the board with a simple linear classifier, and can be
applied to existing finetuning methods [ 100,111,113] or ad-
ditional modalities (e.g. audio).
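To make the recipe concrete, the following is a minimal PyTorch sketch of the cross-modal linear probe described above; the function name, training schedule, and the assumption that image and text embeddings come from a frozen CLIP-style encoder are ours for illustration, not the authors' released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def cross_modal_linear_probe(image_feats, image_labels, text_feats, text_labels,
                             num_classes, epochs=100, lr=1e-3):
    """Train a linear classifier on pooled image + text embeddings.

    image_feats: (N_img, D) frozen, L2-normalized image embeddings (e.g., from CLIP).
    text_feats:  (N_txt, D) frozen, L2-normalized embeddings of the class-name prompts.
    Labels are integer class indices; treating the text rows as extra training
    samples is what turns an n-shot problem into an (n+1)-shot one.
    """
    feats = torch.cat([image_feats, text_feats], dim=0)
    labels = torch.cat([image_labels, text_labels], dim=0)

    clf = nn.Linear(feats.shape[1], num_classes)
    opt = torch.optim.AdamW(clf.parameters(), lr=lr)

    for _ in range(epochs):
        opt.zero_grad()
        loss = F.cross_entropy(clf(feats), labels)  # standard linear-probe objective
        loss.backward()
        opt.step()
    return clf
```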
Why does it work? From one perspective, it may not
be surprising that cross-modal adaptation improves accu-
racy, since it takes advantage of additional training exam-
ples that are “hidden” in the problem definition, e.g. a
label name [ 104] or an annotation policy [ 68] for each
class. However, our experiments demonstrate that multi-
modal cues are often complementary since they capture dif-
ferent aspects of the underlying concept; a dog label paired
with a single visual example is often more performant than
two images! For example, Figure 3 demonstrates a one-
shot example where the target concept is ambiguous, but
becomes clear once we add information from other modali-
ties like language and sound.
Multimodal adaptation (prior art). In contrast to our
cross-modal approach, most prior works simply follow the
popular practice of finetuning uni-modal foundation mod-
els, such as large vision [ 12,31,32] or language mod-
els [8,17,62]. For example, CoOp [ 113] and other prompt-
ing methods [ 63,112,114] finetune CLIP via prefix tuning to
replace hand-engineered prompts such as "a photo of
a {cls}" with learned word tokens. Similarly, inspired
by parameter-efficient tuning of language models [ 39],
adapter-based methods [ 21,111] finetune CLIP by inserting
lightweight multi-layer-perceptrons (MLPs). However, we
aim to study the fundamental question of how to finetune
multi -modal (as opposed to uni-modal) models. A crucial
difference between prior art and ours is the use of textual in-
formation, as all existing methods [ 41,100,111,113] repur-
pose additional text features as classifier weights instead of
training samples . We demonstrate in this paper that cross-
modal adaptation is not only more performant but can also
benefit prior uni-modal approaches.
Problem setup. We begin by replicating the existing
Figure 2. Adding additional modalities helps few-shot learn-
ing. Adding textual labels to a 2-shot cat-vs-dog classification
task leads to better test performance (by turning the problem into
a 3-shot cross-modal task!). We visualize cross-modal CLIP [ 21]
features (projection to 2D with principal component analysis) and
the resulting classifier learned from them, and observe a large shift
in the decision boundary. See Figure 5 for more examples.
evaluation protocol of other works [ 81,111,113] on few-
shot adaptation of vision-language models, and report per-
formance on 11 diverse downstream datasets. We produce
state-of-the-art accuracy with an embarrassingly simple lin-
ear classifier that has access to additional “hidden” train-
ing examples in the form of textual labels, resulting in a
system that is far more lightweight than prior art. Interest-
ingly, we show that existing approaches [ 100,111,113], de-
spite already repurposing text features as classifier weights,
can still benefit from cross-modal learning. Finally, we ex-
tend our work to the audio domain by taking advantage of
AudioCLIP [ 27] that maps audio to the same frozen CLIP
representation space. We construct the first (to our knowl-
edge) cross-modal few-shot learning benchmark with audio
by intersecting ImageNet [ 15] and the ESC-50 audio clas-
sification dataset [ 77]. We show that cross-modal audiovi-
sual learning helps for both downstream image and audio
classification; in summary, one can train better dog image
classifiers by listening to them bark!
|
Li_FCC_Feature_Clusters_Compression_for_Long-Tailed_Visual_Recognition_CVPR_2023
|
Abstract
Deep Neural Networks (DNNs) are rather restrictive in
long-tailed data, since they commonly exhibit an under-
representation for minority classes. Various remedies have
been proposed to tackle this problem from different perspec-
tives, but they ignore the impact of the density of Back-
bone Features (BFs) on this issue. Through representation
learning, DNNs can map BFs into dense clusters in fea-
ture space, while the features of minority classes often show
sparse clusters. In practical applications, these features are
discretely mapped or even cross the decision boundary re-
sulting in misclassification. Inspired by this observation, we
propose a simple and generic method, namely Feature Clus-
ters Compression (FCC), to increase the density of BFs by
compressing backbone feature clusters. The proposed FCC
can be easily achieved by only multiplying original BFs by
a scaling factor in training phase, which establishes a lin-
ear compression relationship between the original and mul-
tiplied features, and forces DNNs to map the former into
denser clusters. In test phase, we directly feed original fea-
tures without multiplying the factor to the classifier, such
that BFs of test samples are mapped closer together and do
not easily cross the decision boundary. Meanwhile, FCC
can be friendly combined with existing long-tailed methods
and further boost them. We apply FCC to numerous state-
of-the-art methods and evaluate them on widely used long-
tailed benchmark datasets. Extensive experiments fully ver-
ify the effectiveness and generality of our method. Code is
available at https://github.com/lijian16/FCC.
|
1. Introduction
Recently, Deep Neural Networks (DNNs) have achieved
considerable success in a variety of visual tasks, such as ob-
ject detection [5,12] and visual recognition [18,38]. Despite
with ingenious networks and powerful learning capabilities,
the great success is inseparable from large-scale balanced
*Corresponding author
Figure 1. (a) DNNs can map backbone features into different clus-
ters, while minority classes are mapped into sparse clusters com-
pared to majority classes. The sparsity causes boundary points
mapped far from their clusters or even cross the boundary. (b) FCC
can compress original features compared with multiplied features,
which makes these features map closer together. Because
the decision boundary remains unchanged in test phase, boundary
points will be brought back within the boundary.
datasets, such as ImageNet [8]. However, datasets collected
from real-world scenarios normally follow imbalanced and
long-tailed distributions, in which few categories (majority
classes) occupy most of the data while many categories (mi-
nority classes) are under-represented [15]. DNNs trained
on such datasets commonly exhibit a bias towards over-
represented majority classes and produce low recognition
accuracy for minority classes [19, 24, 33].
A number of remedies have been proposed to deal with
this problem from different perspectives. For instance,
re-sampling methods aim to balance the data distribution
by designing different sampling strategies [13, 23]. Re-
weighting methods attempt to alleviate the dominance of
majority classes by assigning different weights to each class
[1, 3, 21]. Two-stage training methods divide the vanilla
training procedure into imbalanced learning and balanced
fine-tuning [11, 19]. More recently, multi-expert networks
have been proposed to employ multiple models to learn rep-
resentation from different aspects [32, 37]. These methods
show the outstanding performance on long-tailed recogni-
tion, but they ignore the impact of the density of Backbone
Features (BFs) on this issue.
DNNs primarily consist of the backbone networks and
classifiers [10, 18]. The former, similar to unsupervised
clustering algorithms [9, 20], can map BFs into different
clusters by linear and non-linear operations ( e.g., convolu-
tion kernel and activation functions). The latter is similar
to SVM [2], which can draw the decision boundary based
on these clusters for recognition [7, 14]. DNNs trained on
long-tailed datasets tend to map BFs of majority classes into
dense clusters, but those of minority classes are mapped into
sparse clusters, in which feature points are far from each
other [41, 42]. In practical application, due to the sparsity,
BFs of real samples cannot be mapped close enough to each
other and are scattered in feature space, such that these fea-
ture points, especially boundary points located at the
margin of clusters, easily cross the decision boundary re-
sulting in poor performance, as shown in Fig. 1a.
In view of this observation, we propose a Feature Clus-
ters Compression (FCC) to increase the density of BFs by
compressing backbone feature clusters. Our method can be
easily achieved by only multiplying original BFs by a spe-
cific scaling factor τ (τ > 1) and feeding the multiplied
features to the classifier for training, which will establish
a linear compression relationship between the original and
multiplied features. As the multiplied features are mapped
into clusters in training process, this relationship will force
the backbone to map the original features into denser clus-
ters, thereby improving the density. In test phase, we di-
rectly feed the original BFs without multiplying the factor
τ to the trained classifier, so these features are mapped closer
together, and because the decision boundary remains con-
sistent with training phase, boundary points will be brought
back within the decision boundary resulting in performance
improvement, as shown in Fig. 1b. We notice that the input
of the classifier is different in training and test phases, and
this problem will be discussed in Sec. 3.2.
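The training/test asymmetry described above can be sketched in a few lines; the module below is an illustrative interpretation in PyTorch (the backbone, classifier, and the value of τ are placeholders), not the authors' implementation.

```python
import torch.nn as nn

class FCCHead(nn.Module):
    """Minimal sketch of the FCC idea: scale backbone features by tau (> 1) before
    the classifier during training, and feed the original, unscaled features at
    test time so that they land closer together inside the learned boundaries."""
    def __init__(self, backbone, feat_dim, num_classes, tau=2.0):
        super().__init__()
        self.backbone = backbone          # any feature extractor returning (B, feat_dim)
        self.classifier = nn.Linear(feat_dim, num_classes)
        self.tau = tau                    # placeholder value; tuned per dataset in practice

    def forward(self, x):
        feats = self.backbone(x)
        if self.training:
            feats = self.tau * feats      # compression constraint applied only in training
        return self.classifier(feats)     # decision boundary shared between phases
```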
As reported by [38], different long-tailed methods might
hurt each other when they are employed inappropriately.
For instance, applying re-sampling and re-weighting meth-
ods simultaneously might obtain similar or even worse ac-
curacy than using them alone since they both try to enlarge
the influence of minority classes. But our FCC is a generic
method which only focuses on optimizing BFs without con-
flicting with other modules ( e.g., sampling strategies and
loss functions), and it can be friendly combined with exist-
ing long-tailed methods and further boost them. Our contri-
butions can be summarized as follows:
• We tackle long-tailed visual recognition from a novel
perspective of increasing the density of BFs, which
makes features mapped into denser clusters and bound-
ary points brought back within the decision boundary.
• We propose a Feature Clusters Compression (FCC) to improve the density of BFs, and it can be easily
achieved and friendly combined with existing long-
tailed methods to boost them.
• Extensive experiments demonstrate that FCC applied
to existing methods achieves significant performance
improvement and state-of-the-art results on four popu-
lar datasets, including CIFAR-10-LT, CIFAR-100-LT,
ImageNet-LT and iNaturalist 2018.
|
Liu_CIGAR_Cross-Modality_Graph_Reasoning_for_Domain_Adaptive_Object_Detection_CVPR_2023
|
Abstract
Unsupervised domain adaptive object detection (UDA-
OD) aims to learn a detector by generalizing knowledge
from a labeled source domain to an unlabeled target do-
main. Though the existing graph-based methods for UDA-
OD perform well in some cases, they cannot learn a proper
node set for the graph. In addition, these methods build the
graph solely based on the visual features and do not con-
sider the linguistic knowledge carried by the semantic pro-
totypes, e.g., dataset labels. To overcome these problems,
we propose a cross-modality graph reasoning adaptation
(CIGAR) method to take advantage of both visual and lin-
guistic knowledge. Specifically, our method performs cross-
modality graph reasoning between the linguistic modality
graph and visual modality graphs to enhance their repre-
sentations. We also propose a discriminative feature selec-
tor to find the most discriminative features and take them as
the nodes of the visual graph for both efficiency and effec-
tiveness. In addition, we employ the linguistic graph match-
ing loss to regulate the update of linguistic graphs and
maintain their semantic representation during the training
process. Comprehensive experiments validate the effective-
ness of our proposed CIGAR.
|
1. Introduction
Object detection is a fundamental technique in computer
vision tasks, and it has been widely explored in many ap-
plications, e.g., self-driving and public safety. A variety of
works [31,38,39,56,57] have achieved improvements in de-
tection performance due to the development of deep neural
networks. However, a detector significantly degrades if we
deploy it in a novel domain due to the problem of domain
shift. The domain shift can be induced by many factors,
* Corresponding authors.
[Figure 1 diagram: source/target data, semantic prototypes (Person, Bicycle, Car, ..., Bus), Discriminative Feature Selector, visual and linguistic graph projections, and cross-modality graph reasoning.]
Figure 1. Illustration of the proposed Cross-modality Graph Reasoning Adaptation (CIGAR) framework.
such as the variation of the capture condition from sunny
to foggy weather, from virtual to real-world, and from one
camera to another.
To deal with the problem of domain shift [9], researchers
have proposed many unsupervised domain adaptive object
detection (UDA-OD) methods to bridge the domain gap be-
tween the source and target domains. Among them, the
self-training based methods [5, 10, 26, 35] have shown ex-
cellent performances. However, they cannot be easily ex-
tended to real applications because they are computationally
expensive and inefficient. Feature alignment-based meth-
ods [3,4,6,22,25] have also been extensively studied. These
methods are structurally elegant and can be categorized into
three groups, including global-level alignment, instance-
level alignment, and category-level alignment. The global-
level alignment methods [6,41] align the whole shallow fea-
ture maps produced by a backbone network. The instance-
level alignment methods [4,17] extract the feature maps for
all the instances and learn to achieve cross-domain align-
ment in the deep feature space. The category-level align-
ment methods [58,61] normally first use the detectors or an
additional classification model for pseudo label assignment
to the target samples, then align the category-wise instance
features of two domains based on the ground-truth source
labels and the pseudo target labels.
Numerous works [3, 24, 25] achieve feature alignment
using graph-based approaches. These methods take dense
features as nodes to construct the graphs and investigate the
relationship between nodes. They use the knowledge car-
ried by the graphs to enhance the features for the purpose
of cross-domain alignment and thus improve the detection
performance. Though the graph-based methods have sig-
nificantly improved the feature alignment, they still have
two inherent limitations. First, the existing methods do not
construct the graph with the proper node set. Many graph-
based methods randomly or uniformly [25] select features
to construct graphs, resulting in the missing of some dis-
criminative features. Some other methods [3] are not robust
against noise and are computationally expensive, as they
take all features as the graph nodes. Secondly, they only
explore the visual knowledge extracted from the images. In
this way, they ignore the critical knowledge of the linguis-
tic modality, which carries the semantic prototypes of the
domains, e.g., linguistic dataset labels. Linguistic modality
knowledge is very effective in regulating visual knowledge,
and its absence severely reduces the representative ability
of the resulting features. Some existing works [18,63] have
focused on using semantic category information to enhance
performances in vision tasks. Singh et al. [46] also used
a language model for the semi-supervised domain adaptive
task and achieved improved performance.
To overcome the two limitations mentioned above,
we propose a Cross-modality Graph Reasoning Adapta-
tion (CIGAR) framework for category-level alignment via
graph-based learning, as shown in Fig. 1. To enhance ef-
ficiency and improve the robustness against noise, we pro-
pose a Discriminative Feature Selector (DFS) for finding
discriminative features and constructing the visual graph us-
ing only the discriminative features. In particular, we first
conduct a procedure of singular value decomposition (SVD)
and drop the small singular values, then evaluate the infor-
mation richness of each feature via the summation of the ab-
solute value of the elements. We can improve the represen-
tation ability of visual graphs by only taking these discrim-
inative features as the nodes. Our method is more computa-
tionally efficient than previous methods, which use all im-
age features to construct graphs. Our CIGAR also explores
the graph in the linguistic modality and performs cross-
modality graph reasoning between the linguistic modality
and the visual modality. The linguistic modality knowledge
can guide the mapping of visual modality knowledge from
different domains to the same feature space. Our CIGAR can build a graph not only for the tasks with multiple cat-
egories and capture the relationship between different cate-
gories but also for the tasks with a single category and cap-
ture the relationship between different components of a sin-
gle category. We maintain the semantic representation of
the knowledge in linguistic modality and use it to guide the
training procedure.
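The SVD-based selection step can be sketched as follows; this is a rough interpretation of the DFS idea under our own assumptions (energy threshold, number of kept nodes), not the authors' code.

```python
import torch

def discriminative_feature_selector(features, k_nodes, energy_keep=0.9):
    """Sketch of DFS: run SVD on the candidate features, drop small singular
    values, rank features by the sum of absolute values of the reconstructed
    rows ("information richness"), and keep the top-k as visual-graph nodes.

    features: (N, D) tensor of N candidate features (e.g., flattened spatial locations).
    """
    U, S, Vh = torch.linalg.svd(features, full_matrices=False)
    # keep the leading singular values covering `energy_keep` of the spectrum energy
    energy = torch.cumsum(S, dim=0) / S.sum()
    r = int((energy < energy_keep).sum()) + 1
    denoised = U[:, :r] @ torch.diag(S[:r]) @ Vh[:r]
    richness = denoised.abs().sum(dim=1)           # information richness per feature
    idx = torch.topk(richness, k_nodes).indices    # indices of discriminative features
    return features[idx], idx
```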
We summarize our contributions as follows:
• We propose a Cross-modality Graph Reasoning Adap-
tation (CIGAR) method for the domain adaptive object
detection problem. To the best of our knowledge, this
is the first work to tackle the UDA-OD task by graph
reasoning across different modalities.
• We propose a Discriminative Feature Selector for find-
ing discriminative image features and efficiently con-
structing the representative visual graph.
• Extensive experiments are conducted on four adapta-
tion tasks, and our CIGAR achieves state-of-the-art
performance, outperforming existing works by a large
margin.
|
Li_Discrete_Point-Wise_Attack_Is_Not_Enough_Generalized_Manifold_Adversarial_Attack_CVPR_2023
|
Abstract
Classical adversarial attacks for Face Recognition (FR)
models typically generate discrete examples for target iden-
tity with a single state image. However, such paradigm of
point-wise attack exhibits poor generalization against nu-
merous unknown states of identity and can be easily de-
fended. In this paper, by rethinking the inherent relationship
between the face of target identity and its variants, we in-
troduce a new pipeline of Generalized Manifold Adversarial
Attack (GMAA)1to achieve a better attack performance by
expanding the attack range. Specifically, this expansion lies
on two aspects – GMAA not only expands the target to be
attacked from one to many to encourage a good general-
ization ability for the generated adversarial examples, but
it also expands the latter from discrete points to manifold
by leveraging the domain knowledge that face expression
change can be continuous, which enhances the attack ef-
fect as a data augmentation mechanism did. Moreover, we
further design a dual supervision with local and global con-
straints as a minor contribution to improve the visual qual-
ity of the generated adversarial examples. We demonstrate
the effectiveness of our method based on extensive experi-
ments, and reveal that GMAA promises a semantic contin-
uous adversarial space with a higher generalization ability
and visual quality.
|
1. Introduction
Thanks to the rapid development of deep neural net-
works (DNNs), the face recognition (FR) networks [5, 27]
have been applied to identity security identification systems
in a variety of crucial applications, such as face unlocking
and face payment on smart devices. However, it has been
observed that DNNs are easily fooled by adversarial exam-
ples and offer incorrect assessments [9, 23], which has risks
*Co-first author
†Corresponding author
1https://github.com/tokaka22/GMAA
of unauthorized access to FR systems and stealing personal
privacy through poisoned data. These ‘well-designed’ ad-
versarial examples reveal the aspects of FR models that are
vulnerable to be attacked, which makes the adversarial at-
tack a meaningful work to provide reference for improving
model robustness.
Following the point-wise paradigm, previous methods
typically tend to attack a single target identity sample with
discrete adversarial examples illustrated in Fig. 1. However,
these methods are not strong enough both in target domain
and adversarial domain. Concretely, for the target domain,
attacking a single identity image has a poor generalization
on those unseen faces (even if they belong to the same per-
son) in realistic scenarios. For example, Fig. 2 shows these
adversarial examples which were used to attack the target
(a girl) have a disappointingly lower success rate of attack-
ing the identity in other unseen states. We attribute this to the fact
that attacking a single image of the target identity overfits
the generation of adversarial examples to some fixed factors
such as expression, makeup style, etc. In addition to the
target domain, we naturally consider the weakness of ad-
versarial domain in the existing methods. Most adversarial
attack methods optimize an Lp-bounded perturbation [24, 34]
based on the gradient, which limits the problem to search-
ing for discrete adversarial examples in a hypersphere of the
clean sample and ignores the continuity of the generated ad-
versarial domain. For example, the recently proposed meth-
ods based on makeup style transformation [16, 35,37] focus
on generating finite adversarial examples that correspond
to discrete makeup references. Such methods all overem-
phasize on mining discrete adversarial examples within a
limited scope and ignore the importance of the continuity
in adversarial space. The weakness of current adversarial
attack tasks motivates us, on the one hand, to explore how
to generate adversarial examples that are more general to
various target identity’s states and, on the other hand, to up-
grade adversarial domain from discrete points to continuous
manifold for a stronger attack.
In this paper, we introduce a new paradigm dubbed Gen-
Figure 1. The core idea comparison. Discrete point-wise attack methods leverage a single state of the target identity during training
and provide discrete adversarial examples. By attacking the target identity's state set and employing domain knowledge, our core
idea, Generalized Manifold Adversarial Attack (GMAA), promises a semantic continuous adversarial manifold with a higher
generalization ability on the target identity with unknown states.
Figure 2. The black-box attack success rate on the Mobileface of
attacking target * 1, 2 and 3 during the testing. The three methods
are exclusively trained on target *.
eralized Manifold Adversarial Attack (GMAA) depicted in
Fig. 1, which achieves a higher attack success rate by pro-
viding semantic continuous adversarial domains while ex-
panding the target to be attacked from an instance to a group
set. Specifically, we train adversarial examples to attack
a target identity set rather than a single image, which in-
creases the attack success rate on the target identity with
different states. The expansion of target domain naturally
prompts us to consider enhancing the adversarial domain.
Inspired by the success of some recent data and knowl-
edge dual driven methods [2, 18, 36], we explore a low-
dimensional manifold near the sample according to the do-
main knowledge, which is a simple yet highly-efficient con-
tinuous embedding scheme and can be used to augment the
data. For such manifold, the data in it share the visual iden-
tity same as the original sample and also lie in the decisionboundaries of target identity. More specifically, we employ
the Facial Action Coding System (FACS) [10] as prior do-
main knowledge (a kind of instantiation) to expand adver-
sarial domain from discrete points to manifold. Through
using FACS, the expressions can be encoded into a vec-
tor space, and by which the adversarial example generator
could produce an manifold that is homogeneous with the
expression vector space and possesses semantic continuity.
In addition, in order to build an adversarial space with high
visual quality, we employ four expression editors in the GMAA
pipeline to supervise the adversarial example generation in
terms of global structure and local texture details. A trans-
ferability enhancement module is also introduced to drive
the model to mine robust and transferable adversarial fea-
tures. Extensive experiments have shown that these compo-
nents work well on a wide range of baselines and black-box
attack models.
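A highly simplified sketch of the manifold-attack training step is given below; the generator G, the face-recognition model, the AU dimensionality, and the loss weighting are all assumptions for illustration and do not reproduce the full GMAA pipeline (expression editors, transferability enhancement module).

```python
import torch

def gmaa_training_step(G, fr_model, x_src, target_feats, au_dim=17, lambda_sim=1.0):
    """Sample a continuous expression (AU) code to pick a point on the adversarial
    manifold, generate an adversarial face, and push its FR embedding toward a
    *set* of target-identity states rather than a single image.
    `G`, `fr_model`, `au_dim`, and `lambda_sim` are placeholders, not the paper's values.
    """
    au_code = torch.rand(x_src.size(0), au_dim)            # continuous expression code
    x_adv = G(x_src, au_code)                              # adversarial example on the manifold
    emb = torch.nn.functional.normalize(fr_model(x_adv), dim=-1)
    # target_feats: (K, D) embeddings of K states of the target identity
    sims = emb @ target_feats.t()                          # cosine similarity to each state
    attack_loss = -sims.mean()                             # maximize similarity to all states
    return lambda_sim * attack_loss, x_adv
```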
Our contributions are summarized as follows.
•We first pinpoint that the popular adversarial at-
tack methods generally face generalization difficulty
caused by the limited point-wise attack mechanism.
•To enhance the performance of adversarial attack, a
new paradigm of Generalized Manifold Adversarial
Attack (GMAA) is proposed with an improved attack
success rate and better generalization ability.
•GMAA considers the enhancement in terms of both
target domain and adversarial domain. For the target
domain, it expands the target to be attacked from one
to many to encourage a good generalization. For the
adversarial domain, the domain knowledge is embed-
ded to strengthen the attack effect from discrete points
to continuous manifold.
•We instantiate GMAA in the face expression state
space for a semantic continuous adversarial manifold
and use it to attack a state set of the target identity. As a
minor contribution, GMAA supervises the adversarial
example generator w.r.t global structure and local de-
tails with the pre-trained expression editors for a high
visual quality.
|
Liang_Adaptive_Plasticity_Improvement_for_Continual_Learning_CVPR_2023
|
Abstract
Many works have tried to solve the catastrophic forget-
ting (CF) problem in continual learning (lifelong learning).
However, pursuing non-forgetting on old tasks may dam-
age the model’s plasticity for new tasks. Although some
methods have been proposed to achieve stability-plasticity
trade-off, no methods have considered evaluating a model’s
plasticity and improving plasticity adaptively for a new task.
In this work, we propose a new method, called adaptive
plasticity improvement (API), for continual learning. Be-
sides the ability to overcome CF on old tasks, API also tries
to evaluate the model’s plasticity and then adaptively im-
prove the model’s plasticity for learning a new task if nec-
essary. Experiments on several real datasets show that API
can outperform other state-of-the-art baselines in terms of
both accuracy and memory usage.
|
1. Introduction
Continual learning is a challenging setting in which
agents learn multiple tasks sequentially [21]. However, neu-
ral network models lack the ability to perform continual
learning. Specifically, many studies [15, 21] have shown
that directly training a network on a new task makes the
model forget the old knowledge. This phenomenon is often
called catastrophic forgetting (CF) [10, 21].
Continual learning models need to overcome CF, which
is referred to as stability [21]. Many types of works are pro-
posed for stability. For example, regularization-based meth-
ods [13,35] add a penalty to the loss function and minimize
penalty loss with new task loss together for overcoming CF.
Memory-based methods [5,6,24,29] maintain a memory to
save the information of the old tasks and use saved informa-
tion to keep old task performance. Expansion-based meth-
ods [12, 16] expand the network’s architecture and usually
freeze old tasks’ parameters to overcome CF.
However, having stability alone fails to give the model
*Wu-Jun Li is the corresponding author.continual learning ability. The model also needs plasticity
to learn new tasks in continual learning. The term plas-
ticity came from neuroscience and was originally used to
describe the brain’s ability to yield physical changes in
the neural structure. Plasticity allows us to learn, remem-
ber, and adapt to dynamic environments [22]. In neu-
ral networks, plasticity is used to describe the ability of
a network to change itself for learning new tasks. How-
ever, existing works [17, 18, 30] show that when overcom-
ing CF for stability, the model’s plasticity will decrease,
which will affect the performance of the model for learning
new tasks. Specifically, regularization-based methods and
memory-based methods use penalty or memory to constrain
the parameters when the model learns a new task. When the
number of old tasks increases, the constraints for the model
parameters should become stronger and stronger to ensure
stability. As a result, the model’s plasticity for learning new
tasks decreases. Expansion-based methods [28,32] increase
the model’s plasticity by expanding additional parameters.
However, most of these methods freeze the old part of the
network, making the old part of the network underutilized.
Furthermore, all these methods do not consider how to eval-
uate the model’s plasticity and improve it adaptively.
When overcoming CF, the model should improve its
plasticity if it finds that current plasticity is insufficient to
learn the new task. In this work, we propose a new method,
called adaptive plasticity improvement (API), for continual
learning. The main contributions of API are as follows:
• API overcomes CF through a new memory-based
method called dual gradient projection mem-
ory (DualGPM), which learns a gradient subspace that
can represent the gradients of old tasks.
• Based on DualGPM, API evaluates the model’s plas-
ticity for a new task by average gradient retention ra-
tio (AGRR) and improves the model’s plasticity adap-
tively for a new task if necessary.
• Experiments on several real datasets show that API can
outperform other state-of-the-art baselines in terms of
accuracy and memory usage.
2. Problem Formulation and Related Work
2.1. Problem Formulation
We consider the supervised continual learning setting
where Ttasks are presented to the model sequentially. Each
task has a dataset $D_t=\{(x_i^t, y_i^t)\}_{i=1}^{N_t}$ sampled from a latent
distribution $\mathcal{D}_t$, where $x_i^t$ represents the input data point
and $y_i^t$ represents its class label. A neural network model
$f(\cdot,\Theta)$ with parameters $\Theta$ is trained on these tasks sequen-
tially. The aim is to minimize the average loss of all tasks,
that is
$$\mathcal{L}=\frac{1}{T}\sum_{t=1}^{T}\mathbb{E}_{(x_i^t, y_i^t)\sim\mathcal{D}_t}\big[\,l(f(x_i^t;\Theta),\, y_i^t)\,\big]. \tag{1}$$
Here, $l(\cdot,\cdot)$ is the loss function (e.g., cross-entropy). When
learning a new task $t$, the model has no access to the data of
the previous $t-1$ tasks and it needs to learn new tasks while
maintaining the performance of old tasks. Like many recent
works [14, 17], we assume the task identity is available in
both training and inference stages.
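As a concrete complement to this formulation, the snippet below sketches the gradient-projection idea underlying GPM-style memory methods such as the DualGPM component mentioned above; the orthonormal basis and the projection form are generic assumptions, not the authors' DualGPM implementation.

```python
import torch

def project_gradient(grad, basis):
    """Remove from the new-task gradient its component lying in the subspace
    spanned by old-task gradient directions (columns of `basis`, assumed
    orthonormal), so that updates interfere less with previously learned tasks."""
    if basis is None:                      # first task: nothing to protect yet
        return grad
    g = grad.flatten()
    g_proj = basis @ (basis.t() @ g)       # component inside the old-task subspace
    return (g - g_proj).view_as(grad)      # keep only the orthogonal component
```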
|
Khan_Localized_Semantic_Feature_Mixers_for_Efficient_Pedestrian_Detection_in_Autonomous_CVPR_2023
|
Abstract
Autonomous driving systems rely heavily on the underly-
ing perception module which needs to be both performant
and efficient to allow precise decisions in real-time. Avoid-
ing collisions with pedestrians is of topmost priority in any
autonomous driving system. Therefore, pedestrian detec-
tion is one of the core parts of such systems’ perception
modules. Current state-of-the-art pedestrian detectors have
two major issues. Firstly, they have long inference times
which affect the efficiency of the whole perception module,
and secondly, their performance in the case of small and
heavily occluded pedestrians is poor. We propose Local-
ized Semantic Feature Mixers (LSFM), a novel, anchor-free
pedestrian detection architecture. It uses our novel Super
Pixel Pyramid Pooling module instead of the, computation-
ally costly, Feature Pyramid Networks for feature encod-
ing. Moreover, our MLPMixer-based Dense Focal Detec-
tion Network is used as a light detection head, reducing
computational effort and inference time compared to ex-
isting approaches. To boost the performance of the pro-
posed architecture, we adapt and use mixup augmentation
which improves the performance, especially in small and
heavily occluded cases. We benchmark LSFM against the
state-of-the-art on well-established traffic scene pedestrian
datasets. The proposed LSFM achieves state-of-the-art per-
formance in Caltech, City Persons, Euro City Persons, and
TJU-Traffic-Pedestrian datasets while reducing the infer-
ence time on average by 55%. Further, LSFM beats the
human baseline for the first time in the history of pedestrian
detection. Finally, we conducted a cross-dataset evaluation
which proved that our proposed LSFM generalizes well to
unseen data.
|
1. Introduction
Autonomous driving is currently under the spotlight in
the computer vision community [3, 20]. Detecting and
avoiding collisions with pedestrians is one of the numer-
ous challenges of autonomous driving. Pedestrian detectors
for autonomous driving not only have to be performant but
efficient as well, since rapid perception is required to make
timely decisions. Furthermore, these systems need to ful-
fill additional constraints such as good portability and low
computational footprint, as compute-intensive systems can
have a heavy impact on the milage of autonomous vehicles.
Pedestrian detection for autonomous driving aims to pro-
vide the autonomous vehicle with a timely perception of
all pedestrians in its surroundings. The problem becomes
more challenging as most of the pedestrians are occluded
either by other pedestrians or by other objects [5,48]. Addi-
tionally, the camera stream is introduced with motion blur
since it is coming from the camera mounted on a moving
vehicle [15]. The motion blur problem further intensifies
when the vehicle moves faster. Dealing with motion blur
and occlusion is vital for a pedestrian detector to perform
well. Another major challenge for pedestrian detectors is
the scale variance in pedestrians. Since the camera images
are subject to perspective distortion the pedestrian scales
vary from a few pixels large to almost equal to the height of
the image frame. Small-scale pedestrians (far or short) are
the bottleneck of scale variance problem [5]. The pedestrian
detector needs to sufficiently understand the core visual fea-
tures of a pedestrian and use them to detect pedestrians ir-
respective of their scales.
Furthermore, domain generalization is critical for a
pedestrian detector as it is expected to perform in all cir-
cumstances e.g., all kinds of weather, lighting, and traffic
densities, which might or might not be part of the training
data [14, 15]. Therefore, pedestrian detectors should per-
form well on unseen data to be reliable under real-world
circumstances.
[Figure 1 plots: x-axis Year (2014-2022), y-axis MR-2 (%); panels (a) Reasonable and (b) Heavy Occlusion; methods shown include ACF, LDCF, Checkerboards, F-RCNN, RPN + BF, SSD, YOLOv3, ALFNet, CSP, Cascade RCNN, PRNet, F2DNet, LSFM (ours), and the Human Baseline, on City Persons, Euro City Persons, and Caltech.]
Figure 1. Performance of pedestrian detectors in different settings and their evolution over the years. Both figures contain data on three
different pedestrian detection datasets namely, City Persons [48] (Green), Euro City Persons [1] (Pink), and Caltech Pedestrians [10]
(Yellow). Y-axis values are %-based in both (a) and (b). The proposed LSFM beats the human baseline on the Caltech dataset [47].
Recent research focuses on improving pedestrian detec-
tors in terms of accuracy while ignoring their computational
costs [5]. The performance of pedestrian detectors has im-
proved a lot recently. However, there is still room for im-
provement, especially in heavy occlusion and small cases
[14,15,20,24]. Fig. 1 shows the performance improvements
in pedestrian detection over the last decade. Further, an im-
provement in accuracy usually comes with an increase in
inference time [5], especially in the case of methods based
on Vision-Transformers (ViT) [6, 9, 30]. A similar trade-
off can be observed when using multi-modal sensor fusion.
The accuracy improves while bearing heavy computational
costs. A major component of ViT is self-attention, which
has a complexity of O(n2)and does not scale well for high-
resolution images [41]. Researchers have proposed alterna-
tives to self-attention to avoid heavy computational costs,
one of which is MLPMixers [37]. MLPMixer alternates
between the channel and token dimension, thus maximiz-
ing cache efficiency, and achieving almost similar perfor-
mance to transformers in image classification. However,
when the image resolution is high, the MLPMixer feature
map sizes increase quadratically, making it memory and
compute-intensive backbones for downstream tasks. Also,
the fully-connected nature of the MLP-based networks pre-
vents them from being resolution independent like convolu-
tions, as the number of parameters needs to be predefined.
We propose a novel pedestrian detection network that in-
cludes a Multi Layer Perceptron (MLP) based neck and a
patched MLP mixer-based object detection head [37]. The
proposed neck efficiently extracts and enriches key features
from different stages of the backbone, and the detection
head enables the dense connections between high-level se-
mantic features. Together, when combined with a backbone, they constitute a lightweight, cache-efficient, and yet
performant pedestrian detector. To train our network to be
immune to motion blur and occlusion, we used hard mixup
augmentation, which provides our network with data for
soft occlusion and motion blur-like effects. Also, the hard
mixup augmentation generates additional data for small de-
tection cases to help the network absorb the key features
which work across all scales.
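A generic mixup sketch is shown below for illustration; the paper's "hard" variant and its exact parameters are not specified in this excerpt, so the Beta parameter and the box-weighting convention are assumptions.

```python
import torch

def mixup_images(img_a, img_b, alpha=1.5):
    """Blend two traffic-scene training images to produce soft-occlusion /
    motion-blur-like samples; for detection, the boxes of both images are kept.
    `alpha` is a placeholder Beta parameter, not the paper's setting.
    """
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    mixed = lam * img_a + (1.0 - lam) * img_b
    return mixed, lam   # boxes of img_a kept with weight lam, boxes of img_b with 1 - lam
```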
We conduct an exhaustive evaluation of the proposed
network on renowned pedestrian datasets to test it against
the existing state-of-the-art methods in terms of both per-
formance and efficiency. We conduct a cross dataset eval-
uation to test the domain generalization capabilities of the
proposed network. Further, we perform the ablation study
to check the effectiveness of different components of the
proposed network. Major contributions of this work are as
follows:
• We propose Super Pixel Pyramid Pooling (SP3), an
MLP-based feature pyramid network.
• We propose Dense Focal Detection Network (DFDN),
a lightweight head to allow denser connections.
• We pre-trained a deep but not wide ConvMLP [21]
based backbone, ConvMLP Pin, for the proposed net-
work to reduce inference time.
• We propose pedestrian detectors with backbones of
different sizes to enable applications in resource-
constrained environments.
• Our proposed model beats the human baseline [47] for
the first time in the history of pedestrian detection.
|
Liu_Semi-Weakly_Supervised_Object_Kinematic_Motion_Prediction_CVPR_2023
|
Abstract
Given a 3D object, kinematic motion prediction aims to
identify the mobile parts as well as the corresponding mo-
tion parameters. Due to the large variations in both topo-
logical structure and geometric details of 3D objects, this
remains a challenging task and the lack of large scale la-
beled data also constrain the performance of deep learning
based approaches. In this paper, we tackle the task of object
kinematic motion prediction problem in a semi-weakly su-
pervised manner. Our key observations are two-fold. First,
although 3D dataset with fully annotated motion labels is
limited, there are existing datasets and methods for object
part semantic segmentation at large scale. Second, seman-
tic part segmentation and mobile part segmentation is not
always consistent but it is possible to detect the mobile parts
from the underlying 3D structure. Towards this end, we pro-
pose a graph neural network to learn the map between hier-
archical part-level segmentation and mobile parts parame-
ters, which are further refined based on geometric align-
ment. This network can be first trained on PartNet-Mobility
dataset with fully labeled mobility information and then ap-
plied on PartNet dataset with fine-grained and hierarchical
part-level segmentation. The network predictions yield a
large scale of 3D objects with pseudo labeled mobility in-
formation and can further be used for weakly-supervised
learning with pre-existing segmentation. Our experiments
show there are significant performance boosts with the aug-
mented data for previous method designed for kinematic
motion prediction on 3D partial scans.
|
1. Introduction
In this work, we study the problem of 3D object kine-
matic motion prediction. As a key aspect of functional
analysis of 3D shape, the understanding of part mobili-
ties plays an important role in applications, such as robot-
*Ruizhen Hu is the corresponding author.
Figure 1. Our semi-weakly supervised method for object kine-
matic motion prediction, which utilizes the weakly labeled dataset
with part semantic segmentation to augment the training data for
object kinematic motion prediction on partial scans.
environment interaction, object functionality prediction, as
well as 3D object reconstruction and animation.
There have been a few methods on discovering part mo-
bility from various inputs, including 3D objects and se-
quence of RGBD scans, via both supervised or unsuper-
vised learning [8, 11, 22, 26, 28]. Despite the rapid develop-
ment in this field, it remains challenging to boost the gener-
alization performance of these methods: there is significant
performance degeneration when the queried objects present
different topology, structure, or geometric details that are
out of the space of training examples.
However, it is with great difficulty to collect fully an-
notated dataset for kinematic motion prediction, which re-
quires not only segmentation of object w.r.t motion but also
the associated parameters. Furthermore, it is infeasible to
collect such dataset at large scale, since there are only a
few dataset with limited size available for motion prediction
task such as PartNet-Mobility [24] and Shape2Motion [22].
Meanwhile, as a long lasting task in both graphics and vi-
sion, there have been massive methods proposed with large
scale dataset of segmented 3D objects. Among them, Part-
Net dataset [17] consists of more than 25k 3D models of
24 object categories with the corresponding fine-grained
and hierarchical 3D part information. Given such a dataset
that covers sufficient variations in 3D topological structure
and geometric details, we wonder if we can bridge the best
of two worlds: transfer the fully labeled information from
PartNet-Mobility into PartNet, and leverage the augmented
data for motion prediction learning in a semi-weakly super-
vised manner, as shown in Figure 1.
Such transfer is non-trivial and the main challenge lies
in the fact that existing segmentation of a 3D model does
not necessarily comply with its mobilities. For example, for
most car models, the door is not a separate component. This
fact significantly increases the difficulty of transfer: it is not
just adding mobility parameters to the given segmentation
but also to find the appropriate mobile parts.
Since both PartNet-Mobility and PartNet provide fine-
grained and hierarchical part-level segmentation, we utilize
this information and convert the mobile part detection task
into a part selection task from the given hierarchy. Specif-
ically, we develop a two-stage pipeline in a coarse-to-fine
manner. In the first stage, we construct a graph that encodes
both the parent-child hierarchy and adjacent part pairs. We
then train a graph neural network to predict the motion part
at the edge level, including its motion type and correspond-
ing parameters. It is trained with PartNet-Mobility data and
then applied on PartNet to get the initial prediction. In the
second stage, we introduce a refinement process to filter out
prediction with low feasibility score and consistency score,
refine the motion axis using a heuristic strategy, and gener-
ate the final pseudo labels as motion prediction results.
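The first-stage prediction can be pictured with the rough sketch below; the feature dimensions, the single round of mean aggregation, and the joint type-plus-parameter head are our simplifications for illustration, not the authors' network.

```python
import torch
import torch.nn as nn

class PartGraphMotionNet(nn.Module):
    """Message passing over a part graph (parent-child and adjacency edges),
    followed by an edge-level head predicting a motion type and motion
    parameters for each candidate (parent, child) pair."""
    def __init__(self, in_dim=128, hid=256, num_motion_types=3, param_dim=7):
        super().__init__()
        self.node_mlp = nn.Sequential(nn.Linear(in_dim, hid), nn.ReLU(), nn.Linear(hid, hid))
        self.edge_head = nn.Sequential(
            nn.Linear(2 * hid, hid), nn.ReLU(),
            nn.Linear(hid, num_motion_types + param_dim))  # type logits + axis/origin params

    def forward(self, node_feats, edges):
        h = self.node_mlp(node_feats)                      # (N, hid) node embeddings
        # one round of mean aggregation from neighbors (simplified message passing)
        agg = torch.zeros_like(h).index_add_(0, edges[:, 0], h[edges[:, 1]])
        deg = torch.zeros(h.size(0), 1, device=h.device).index_add_(
            0, edges[:, 0], torch.ones(edges.size(0), 1, device=h.device))
        h = h + agg / deg.clamp(min=1)
        pair = torch.cat([h[edges[:, 0]], h[edges[:, 1]]], dim=-1)
        return self.edge_head(pair)                        # per-edge motion prediction
```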
To evaluate the effectiveness of our generated pseudo
prediction, we further apply it on the task of motion pre-
diction on partial scans. We adapt the ANCSH method pro-
posed in [13] and enhance its training with the augmented
data. Our experiments show that the prediction accuracy
can be improved by a large margin and our approach out-
performs state-of-the-art methods under such semi-weakly
supervised learning pipeline.
To summarize, our contributions are as follows:
• We are the first to tackle the problem of object kine-
matic motion prediction using a semi-weakly super-
vised learning framework. Specially, we successfully
transfer the fully labeled motion information from a
dataset of limited examples to a large set of unlabeled
3D shapes by leveraging the pre-existing segmenta-
tion hierarchy information. The augmented data can
be used for weakly supervised learning at large scale.
• We propose a robust two-stage pipeline for motion in-
formation transfer: A graph neural network is first
adapted to train and predict the map between seman-
tic segmentation hierarchy and part motion properties.
Then a heuristic strategy is presented to refine and out-
put the final predictions.• We further evaluate our pipeline on the task of 3D par-
tial scans motion prediction. Our experiments show
that existing approaches trained with the data aug-
mented by our method can be improved significantly
in terms of prediction accuracy.
|
Leclerc_FFCV_Accelerating_Training_by_Removing_Data_Bottlenecks_CVPR_2023
|
Abstract
We present FFCV , a library for easy and fast machine
learning model training. FFCV speeds up model training by
eliminating (often subtle) data bottlenecks from the training
process. In particular, we combine techniques such as an ef-
ficient file storage format, caching, data pre-loading, asyn-
chronous data transfer, and just-in-time compilation to (a)
make data loading and transfer significantly more efficient,
ensuring that GPUs can reach full utilization; and (b) of-
fload as much data processing as possible to the CPU asyn-
chronously, freeing GPU cycles for training. Using FFCV ,
we train ResNet-18 and ResNet-50 on the ImageNet dataset
with a state-of-the-art tradeoff between accuracy and train-
ing time. For example, across the range of ResNet-50 mod-
els we test, we obtain the same accuracy as the best base-
lines in half the time. We demonstrate FFCV ’s performance,
ease-of-use, extensibility, and ability to adapt to resource
constraints through several case studies. Detailed installa-
tion instructions, documentation, and Slack support chan-
nel are available at https://ffcv.io/ .
|
1. Introduction
What’s the limiting factor in faster model training? Hint:
it isn’t always the GPUs. When training a machine learning
model, the life cycle of an individual example spans three
stages: reading the example into memory, processing the
example in memory, and finally updating model parameters
with the example on GPU (e.g. by calculating and then fol-
lowing the gradient). The stage with the lowest throughput
determines the overall learning system’s throughput.
Our investigations (and others’ [ Mohan et al.(2021) ])
show that in practice the limiting factor is often not com-
puting model updates but rather the data reading and data
processing stages. Indeed, in standard training setups, the
GPUs can spend a majority of cycles just waiting for inputs
to process!
*Equal contribution.
Figure 1. Accuracy vs. training time when training a ResNet-50 on
8 A100s. The FFCV accuracy/training time tradeoff outperforms
all baselines. As an example, we can train ImageNet to 75% accu-
racy in less than 20 minutes on a single machine.
To better saturate GPUs and thereby increase training
throughput, we present FFCV , a system designed to reduce
data loading and processing bottlenecks while remaining
simple to use. FFCV operates in two successive stages: pre-
processing andtrain-time loading .
In the first stage, FFCV preprocesses the dataset into a
format more amenable to high throughput loading. Then,
in the train-time loading stage, FFCV ’s data loader replaces
the original learning system’s data loader without requiring
any other implementation changes.
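For illustration, a minimal two-stage usage sketch in the spirit of the library's documented workflow is shown below; the dataset, paths, and pipeline choices are placeholders, and exact field/decoder options may differ across FFCV versions.

```python
from ffcv.writer import DatasetWriter
from ffcv.fields import RGBImageField, IntField
from ffcv.loader import Loader, OrderOption
from ffcv.fields.decoders import SimpleRGBImageDecoder, IntDecoder
from ffcv.transforms import ToTensor, ToTorchImage

# Stage 1: preprocess an indexed dataset (returning (PIL image, int label)) into .beton
writer = DatasetWriter('train.beton', {
    'image': RGBImageField(max_resolution=256),
    'label': IntField(),
})
writer.from_indexed_dataset(my_dataset)  # my_dataset is a placeholder

# Stage 2: drop-in replacement for the standard data loader at train time
loader = Loader('train.beton', batch_size=256, num_workers=8,
                order=OrderOption.RANDOM,
                pipelines={'image': [SimpleRGBImageDecoder(), ToTensor(), ToTorchImage()],
                           'label': [IntDecoder(), ToTensor()]})
for images, labels in loader:
    pass  # regular training step goes here
```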
Together, FFCV data preprocessing and the FFCV data
loader can drastically increase training speeds without any
learning algorithm modifications. To demonstrate, we train
machine learning models for a number of tasks much faster
than previous general purpose data loaders can support, in-
cluding single-node ResNet-50 [ He et al.(2015) ] training
(2x faster than the previous state-of-the-art to reach the
same accuracies) and parallel ResNet-18 training (we can
train 14 models per minute on an 8 GPU machine). While
FFCV improves performance on most GPUs, its effect is
most pronounced on faster GPUs, which require higher
throughput data loading to saturate available compute ca-
pacity. We expect FFCV will only increase in utility as new
GPUs become faster.
Contributions. We introduce FFCV , a drop-in, general
purpose training system for high throughput data loading.
Using FFCV requires noalgorithmic changes, and involves
a nearly identical API to standard data loading systems
(e.g., the default PyTorch data loader). FFCV automatically
handles the necessary data transfer, memory management,
and data conversion work that users usually manually opti-
mize (or leave to subopti
|
Liu_Joint_HDR_Denoising_and_Fusion_A_Real-World_Mobile_HDR_Image_CVPR_2023
|
Abstract
Mobile phones have become a ubiquitous and indis-
pensable photographing device in our daily life, while the
small aperture and sensor size make mobile phones more
susceptible to noise and over-saturation, resulting in low
dynamic range (LDR) and low image quality. It is thus
crucial to develop high dynamic range (HDR) imaging
techniques for mobile phones. Unfortunately, the existing
HDR image datasets are mostly constructed by DSLR cam-
eras in daytime, limiting their applicability to the study of
HDR imaging for mobile phones. In this work, we de-
velop, for the first time to our best knowledge, an HDR
image dataset by using mobile phone cameras, namely
Mobile-HDR dataset. Specifically, we utilize three mo-
bile phone cameras to collect paired LDR-HDR images in
the raw image domain, covering both daytime and night-
time scenes with different noise levels. We then propose
a transformer based model with a pyramid cross-attention
alignment module to aggregate highly correlated features
from different exposure frames to perform joint HDR de-
noising and fusion. Experiments validate the advantages
of our dataset and our method on mobile HDR imaging.
Dataset and codes are available at https://github.com/shuaizhengliu/Joint-HDRDN.
|
1. Introduction
With the rapid development of mobile communication
techniques and digital imaging sensors, mobile phones have
surpassed DSLR cameras and become the most prevalent
device for photography in our daily life. Nonetheless, due to
the low dynamic range (LDR) of mobile phone sensors [7],
the captured images may lose details in dark and bright
regions under challenging lighting conditions. Therefore,
high dynamic range (HDR) imaging [3] is critical for im-
proving the quality of mobile phone photography.
*Corresponding author. This work is supported by the Hong Kong RGC RIF grant (R5001-18) and the PolyU-OPPO Joint Innovation Lab.
Actually, HDR imaging has been a long-standing re-
search topic in computational photography, even for DSLR
cameras. An effective and commonly used way to construct
an HDR image is to fuse a stack of LDR frames with dif-
ferent exposure levels. If the multiple LDR frames can be
well aligned ( e.g., in static scenes), they can be easily fused
to generate the HDR image [3, 23]. Unfortunately, in dy-
namic scenes where there exist camera shaking and/or ob-
ject motion, the fused HDR image may introduce ghost ar-
tifacts caused by inaccurate alignment [45]. Some deghost-
ing methods have been proposed to reject pixels which can
be hardly registered [12, 16]. However, precisely detecting
moving pixels is challenging and rejecting too many pixels
will sacrifice useful information for HDR fusion.
In the past decade, deep learning [15] has demonstrated
its powerful capability to learn image priors from a rich
amount of data [4]. Unfortunately, the development of
deep models for HDR imaging is relatively slow, mainly
due to the lack of suitable training datasets. Kalantari
et al . [10] built the first dataset with LDR-HDR image
pairs by DSLR cameras in daytime. Benefiting from this
dataset, many deep learning algorithms have been proposed
for HDR imaging. Some works [10] employ the convolu-
tional neural network (CNN) for fusion after aligning mul-
tiple frames with optical flow [18], which is however unreli-
able under occlusion and large motions. Subsequent works
resort to employing various networks to directly reconstruct
the HDR image from LDR frames. Liu et al. [19] developed
a deformable convolution based module to align the features
of input frames. Yan et al. [40] proposed a spatial attention
mechanism to suppress undesired features and employed a
dilated convolution network [42] for frame fusion. Follow-
ing this spatial attention mechanism, some fusion networks
have been developed with larger receptive fields, such as
non-local networks [41] and Transformer networks [21].
Though the dataset developed in [10] has largely facili-
tated the research of deep learning on HDR imaging, it is
not well suited for the investigation of HDR imaging tech-
niques for mobile phone cameras. Firstly, due to the small
aperture and sensor size, images captured by mobile phones
are more susceptible to noise than DSLR cameras, espe-
cially in nighttime. However, the images in dataset [10]
are generally very clean since they are collected by DSLR
cameras in daytime. Compared with DSLR cameras, the
normally exposed frame of mobile phone cameras contains
stronger noise, which should be reduced by fusing with
other frames. Secondly, mobile phone cameras often have
fewer recording bits (12-bit) than DSLRs (14-bit), resulting
in larger overexposure areas in the reference frame. There-
fore, mobile HDR imaging is a more challenging problem,
and new dataset and new solutions are demanded.
To address the above limitations of existing HDR
datasets and facilitate the research on real-world mobile
HDR imaging, we establish a new HDR dataset, namely
Mobile-HDR, by using mobile phone cameras. Specifically,
we utilize three mobile phones to collect LDR-HDR im-
age pairs in raw image domain, covering both daytime and
nighttime scenes with different noise levels. In order to ob-
tain high-quality ground truth of HDR images, we first col-
lect noise-free LDR images under each exposure by multi-
frame averaging, and then synthesize the ground truth HDR
image by fusing the generated clean LDR frames. For dy-
namic scenes with object motion, we follow [10] to first
capture multiple exposed frames from static scenes to syn-
thesize the ground truth HDR images, and then replace the
non-reference frames with the images captured in dynamic
scenes as input. To our best knowledge, this is the first mo-
bile HDR dataset with paired training data.
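The construction above can be pictured with a small numpy sketch: average a burst of noisy raw frames per exposure, then merge the clean LDR exposures with a weighted fusion. The hat weighting, saturation threshold, and exposure normalization below are illustrative assumptions, not the dataset's exact protocol.

import numpy as np

def average_burst(frames):
    # frames: list of HxW (or HxWxC) raw captures at the same exposure.
    return np.mean(np.stack(frames, axis=0), axis=0)

def merge_hdr(ldr_frames, exposure_times, sat_level=0.95):
    # ldr_frames: clean LDR frames normalized to [0, 1], one per exposure.
    # Down-weight saturated/near-black pixels, divide by exposure time to
    # bring frames onto a common radiance scale, then average.
    num = np.zeros_like(ldr_frames[0], dtype=np.float64)
    den = np.zeros_like(ldr_frames[0], dtype=np.float64)
    for img, t in zip(ldr_frames, exposure_times):
        w = np.clip(1.0 - np.abs(2.0 * img - 1.0), 1e-3, 1.0)  # hat weighting
        w = np.where(img > sat_level, 0.0, w)                  # drop saturated pixels
        num += w * img / t
        den += w
    return num / np.maximum(den, 1e-8)

# Toy usage: three exposures, each averaged from a burst of noisy captures.
rng = np.random.default_rng(0)
bursts = {t: [np.clip(rng.random((4, 4)) + 0.01 * rng.standard_normal((4, 4)), 0, 1)
              for _ in range(8)] for t in (1 / 100, 1 / 25, 1 / 6)}
clean = {t: average_burst(fs) for t, fs in bursts.items()}
hdr = merge_hdr(list(clean.values()), list(clean.keys()))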
With the established dataset, we propose a new trans-
former based model for joint HDR denoising and fusion.
To enhance denoising and achieve alignment, we design
a pyramid cross-attention module to implicitly align and
fuse input features. The cross-attention operation enables
searching and aggregating highly correlated features from
different frames, while the pyramid structure facilitates the
feature alignment under severe noise, large overexposure
and large motion. A transformer module is then applied
to fuse the aligned features for HDR image recovery.
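A minimal PyTorch sketch of the cross-attention alignment idea follows: reference-frame features query a non-reference frame so that correlated content is aggregated before fusion. It illustrates a single scale only; the paper's module applies this within a multi-level pyramid, and the class and parameter names here are invented for illustration.

import torch
import torch.nn as nn

class CrossFrameAttention(nn.Module):
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, ref_feat, src_feat):
        # ref_feat, src_feat: (B, C, H, W) features of the reference and a
        # non-reference exposure frame.
        b, c, h, w = ref_feat.shape
        q = ref_feat.flatten(2).transpose(1, 2)   # (B, HW, C) queries
        kv = src_feat.flatten(2).transpose(1, 2)  # (B, HW, C) keys/values
        aligned, _ = self.attn(self.norm(q), kv, kv)
        aligned = aligned + q                     # residual keeps reference content
        return aligned.transpose(1, 2).reshape(b, c, h, w)

# Toy usage on random tensors.
ref, src = torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32)
aligned = CrossFrameAttention(64)(ref, src)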
The contributions of our work can be summarized as fol-
lows. First, we build the first mobile HDR dataset with
LDR-HDR image pairs under various scenes. Second, we
propose a cross-attention based alignment module to per-
form effective joint HDR denoising and fusion. Third, we
perform extensive experiments to validate the advantages of
our dataset and model. Our work provides a new platform
for researchers to investigate and evaluate real-world mo-
bile HDR imaging techniques.
|
Ke_Mask-Free_Video_Instance_Segmentation_CVPR_2023
|
Abstract
The recent advancement in Video Instance Segmentation
(VIS) has largely been driven by the use of deeper and in-
creasingly data-hungry transformer-based models. How-
ever, video masks are tedious and expensive to annotate,
limiting the scale and diversity of existing VIS datasets. In
this work, we aim to remove the mask-annotation require-
ment. We propose MaskFreeVIS, achieving highly compet-
itive VIS performance, while only using bounding box an-
notations for the object state. We leverage the rich tempo-
ral mask consistency constraints in videos by introducing
the Temporal KNN-patch Loss (TK-Loss), providing strong
mask supervision without any labels. Our TK-Loss finds
one-to-many matches across frames, through an efficient
patch-matching step followed by a K-nearest neighbor se-
lection. A consistency loss is then enforced on the found
matches. Our mask-free objective is simple to implement,
has no trainable parameters, is computationally efficient,
yet outperforms baselines employing, e.g., state-of-the-art
optical flow to enforce temporal mask consistency. We val-
idate MaskFreeVIS on the YouTube-VIS 2019/2021, OVIS
and BDD100K MOTS benchmarks. The results clearly
demonstrate the efficacy of our method by drastically nar-
rowing the gap between fully and weakly-supervised VIS
performance. Our code and trained models are available
at http://vis.xyz/pub/maskfreevis.
|
1. Introduction
Video Instance Segmentation (VIS) requires jointly de-
tecting, tracking and segmenting all objects in a video from
a given set of categories. To perform this challenging
task, state-of-the-art VIS models are trained with complete
video annotations from VIS datasets [39, 61, 64]. However,
video annotation is costly, in particular regarding object
mask labels. Even coarse polygon-based mask annotation
is multiple times slower than annotating video bounding
boxes [7]. Expensive mask annotation makes existing VIS
benchmarks difficult to scale, limiting the number of object
categories covered. This is particularly a problem for the re-
cent transformer-based VIS models [6,17,57], which tend to
be exceptionally data-hungry. We therefore revisit the need
for complete mask annotation by studying the problem of
weakly supervised VIS under the mask-free setting .
While there exist box-supervised instance segmenta-
tion models [13, 23, 27, 50], they are designed for images.
These weakly-supervised single-image methods do not uti-
lize temporal cues when learning mask prediction, leading
to lower accuracy when directly applied to videos. As a
source for weakly supervised learning, videos contain much
richer information about the scene. In particular, videos ad-
here to the temporal mask consistency constraint, where the
regions corresponding to the same underlying object across
different frames should have the same mask label. In this
work, we set out to leverage this important constraint for
mask-free learning of VIS.
Figure 2. Our Temporal KNN-patch Loss enforces mask consistency between one-to-k patch correspondences found across frames, which allows us to cover the cases where: (i) a unique one-to-one match exists (blue); (ii) multiple matches are found due to ambiguities in homogeneous regions (orange) or along image edges (white and yellow); (iii) no match is found due to e.g. occlusions (green). This allows us to robustly leverage mask consistency constraints in challenging videos.
We propose the MaskFreeVIS method for high-performance
VIS without any mask annotations. To leverage temporal
mask consistency, we introduce the Temporal KNN-patch
Loss (TK-Loss), as in Figure 2. To find regions correspond-
ing to the same underlying video object, our TK-Loss first
builds correspondences across frames by patch-wise match-
ing. For each target patch, only the top K matches in the
neighboring frame with high enough matching score are se-
lected. A temporal consistency loss is then applied to all
found matches to promote the mask consistency. Specif-
ically, our surrogate objective function not only promotes
the one-to-k matched regions to reach the same mask prob-
abilities, but also commits their mask prediction to a confi-
dent foreground or background prediction by entropy mini-
mization. Unlike flow-based models [32,45], which assume
one-to-one matching, our approach builds robust and flex-
ible one-to-k correspondences to cope with e.g. occlusions
and homogeneous regions, without introducing additional
model parameters or inference cost.
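The following sketch spells out the TK-Loss recipe described above (patch matching, top-K selection above a score threshold, a consistency term, and entropy minimization). The patch size, K, the threshold, and the choice of matching features are assumptions; this is a schematic illustration, not the authors' implementation.

import torch
import torch.nn.functional as F

def tk_loss(feat_t, feat_t1, mask_t, mask_t1, k=5, tau=0.7, patch=3):
    # feat_*: (C, H, W) features used for patch matching between two frames.
    # mask_*: (H, W) predicted foreground probabilities for one instance.
    unfold = lambda x: F.unfold(x.unsqueeze(0), patch, padding=patch // 2)[0]  # (C*p*p, HW)
    p_t = F.normalize(unfold(feat_t), dim=0)
    p_t1 = F.normalize(unfold(feat_t1), dim=0)
    sim = p_t.t() @ p_t1                                   # (HW, HW) patch similarities
    topv, topi = sim.topk(k, dim=1)                        # top-K candidates per patch
    valid = (topv > tau).float()                           # keep confident matches only
    m_t = mask_t.reshape(-1, 1)                            # (HW, 1)
    m_t1 = mask_t1.reshape(-1)[topi]                       # (HW, K) matched probabilities
    consistency = (valid * (m_t - m_t1).abs()).sum() / valid.sum().clamp(min=1)
    ent = -(m_t1 * (m_t1 + 1e-6).log() + (1 - m_t1) * (1 - m_t1 + 1e-6).log())
    entropy = (valid * ent).sum() / valid.sum().clamp(min=1)
    return consistency + entropy

# Toy usage: random features and mask probabilities for two 32x32 frames.
f0, f1 = torch.rand(8, 32, 32), torch.rand(8, 32, 32)
m0, m1 = torch.rand(32, 32), torch.rand(32, 32)
print(tk_loss(f0, f1, m0, m1))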
The TK-Loss is easily integrated into existing VIS meth-
ods, with no architecture modifications required. Dur-
ing training, our TK-Loss simply replaces the conventional
video mask losses in supervising video mask generation.
To further enforce temporal consistency through the video
clip, TK-Loss is employed in a cyclic manner instead of
using dense frame-wise connections. This greatly reduces
memory cost with negligible performance drop.
We extensively evaluate our MaskFreeVIS on four large-
scale VIS benchmarks, i.e., YouTube-VIS 2019/2021 [61],
OVIS [39], and BDD100K MOTS [64]. MaskFreeVIS
achieves competitive VIS performance without using any
video masks or even image mask labels on all datasets. Val-
idated on various methods and backbones, MaskFreeVIS
achieves 91.25% performance of its fully supervised coun-
terparts, even outperforming a few recent fully-supervised
methods [10, 15, 18, 59] on the popular YTVIS bench-
mark. Our simple yet effective design greatly narrows
the performance gap between weakly-supervised and fully-supervised video instance segmentation. It further demonstrates that expensive video masks, or even image masks, are not necessary for training high-performing VIS models.
Table 1. Mask annotation requirement for state-of-the-art VIS methods. Results are reported using ResNet-50 as the backbone on the YTVIS 2019 [61] benchmark. Video Mask: using YTVIS video mask labels. Image Mask: using COCO [30] image mask labels for image-based pretraining. Pseudo Video: using pseudo videos from COCO images for joint training [57]. MaskFreeVIS achieves 91.5% (42.5 vs. 46.4) of its fully-supervised baseline performance (Mask2Former) without using any masks during training.
Method               Video Mask   Image Mask   Pseudo Video   AP
SeqFormer [57]           ✓            ✓             ✓         47.4
VMT [17]                 ✓            ✓             ✓         47.9
Mask2Former [6]          ✓            ✓             ✓         47.8
MaskFreeVIS (ours)       ✗            ✓             ✓         46.6
Mask2Former [6]          ✓            ✓             ✗         46.4
MaskFreeVIS (ours)       ✗            ✗             ✗         42.5
Our contributions are summarized as follows: (i)To uti-
lize temporal information, we develop a new parameter-
free Temporal KNN-patch Loss, which leverages temporal
mask consistency using unsupervised one-to-k patch cor-
respondence. We extensively analyze the TK-Loss through
ablative experiments. (ii)Based on the TK-Loss, we de-
velop the MaskFreeVIS method, enabling training existing
state-of-the-art VIS models without any mask annotation.
(iii)To the best of our knowledge, MaskFreeVIS is the first
mask-free VIS method attaining high-performing segmen-
tation results. We provide qualitative results in Figure 1.
As in Table 1, when integrated into the Mask2Former [6]
baseline with ResNet-50, our MaskFreeVIS achieves 42.5%
mask AP on the challenging YTVIS 2019 benchmark while
using no video or image mask annotations. Our approach
further scales to larger backbones, achieving 55.3% mask
AP with a Swin-L backbone and no video mask annotations.
We hope our approach will facilitate achieving label-
efficient video instance segmentation, enabling building
even larger-scale VIS benchmarks with diverse categories
by lifting the mask annotation restriction.
|
Li_Edge-Aware_Regional_Message_Passing_Controller_for_Image_Forgery_Localization_CVPR_2023
|
Abstract
Digital image authenticity has promoted research on im-
age forgery localization. Although deep learning-based
methods achieve remarkable progress, most of them usu-
ally suffer from severe feature coupling between the forged
and authentic regions. In this work, we propose a two-step
Edge-aware Regional Message Passing Controlling strat-
egy to address the above issue. Specifically, the first step
is to account for fully exploiting the edge information. It
consists of two core designs: context-enhanced graph con-
struction and threshold-adaptive differentiable binarization
edge algorithm. The former assembles the global semantic
information to distinguish the features between the forged
and authentic regions, while the latter stands on the output
of the former to provide the learnable edges. In the sec-
ond step, guided by the learnable edges, a region message
passing controller is devised to weaken the message passing
between the forged and authentic regions. In this way, our
ERMPC is capable of explicitly modeling the inconsistency
between the forged and authentic regions and enabling it
to perform well on refined forged images. Extensive exper-
iments on several challenging benchmarks show that our
method is superior to state-of-the-art image forgery local-
ization methods qualitatively and quantitatively.
|
1. Introduction
The forged or manipulated images pose risks in vari-
ous fields, such as removing copyright watermarks, gener-
ating fake news, and even falsifying evidence in court [32].
The growing technology of forgery will cause a crisis of
trust and affect social equity. Therefore, the detection of
image forgery is of great significance.
∗Co-first authors contributed equally, †corresponding author. This work was supported by the National Key R&D Program of China under Grant 2020AAA0105702, the National Natural Science Foundation of China (NSFC) under Grants 62225207, 62276243, 62106245 and U19B2038, and the University Synergy Innovation Program of Anhui Province under Grant GXXT-2019-025.
Figure 1. The difference between previous methods and ours. Our method controls the message passing between the forged and authentic regions, which is ignored by the previous method.
The crucial aspect
of the detection is to model the inconsistency between the
forged and authentic regions and to locate the forged re-
gions on the suspicious image, i.e., image forgery local-
ization (IFL). However, as post-processing techniques such
as GAN [16, 26, 63], V AE [27, 44] and homogeneous ma-
nipulation [7, 33] are wildly utilized, images can be eas-
ily tampered in a visually imperceptible way. These tech-
niques constantly aim to couple the forged and authentic
regions’ features, making image forgery localization chal-
lenging. Therefore, in order to accurately locate the image
forgery region, it is particularly essential to decouple the
features between forged and authentic regions.
In recent years, deep-learning techniques have attracted
more attention [23, 57, 58, 65, 66]. Due to the development
of deep learning, image forgery localization has achieved
remarkable results. For example, ManTra-Net [55] treats
the forgery localization problem as a local anomaly detec-
tion problem and proposes a novel long short-term mem-
ory solution to assess local anomalies. To discriminate
between heterologous regions, SPAN employs CNNs to
extract anomalous local noise features from noise maps.
MVSS-Net [5] learns a multiview feature with multi-scale
supervised networks to jointly exploit the noise view and
the boundary artifacts. However, these methods do not de-
couple the features between the forged and authentic re-
gions, making it difficult to accurately locate the tampered
regions in elaborately forged images. As shown in Figure 1,
in the previous method, the features of the forged region are
coupled with some of the features of the authentic region,
resulting in wrong localization.
In this work, we propose a novel method to avoid fea-
ture coupling of the two regions (forged and authentic) for
image forgery localization. One of the keys of this method
is to construct a dynamic graph, where the edges between
the forged and authentic regions participate in its construc-
tion. We control the message passing of regions inside and
outside the edges (i.e., forged and authentic) by reconstruct-
ing the adjacency matrix of nodes inside and outside the
edges, thus achieving effective disentanglement of the fea-
tures between the forged and authentic regions. Based on its
functionality, the edge-aware dynamic graph convolution is
named the regional message passing controller (RMPC).
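A toy version of this message-passing control is sketched below: a dynamic feature-similarity graph is gated so that nodes predicted to lie inside and outside the forged region stop exchanging messages. The adjacency construction and gating rule are illustrative assumptions rather than the exact RMPC design.

import torch
import torch.nn as nn

class MaskedGraphConv(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, x, region):
        # x: (N, C) node features, e.g. flattened feature-map positions.
        # region: (N,) soft probability that each node is in the forged region.
        adj = torch.softmax(x @ x.t() / x.shape[1] ** 0.5, dim=-1)      # dynamic graph
        same = region.unsqueeze(0) * region.unsqueeze(1) + \
               (1 - region).unsqueeze(0) * (1 - region).unsqueeze(1)    # ~1 iff same region
        adj = adj * same                                                # cut cross-region messages
        adj = adj / adj.sum(dim=-1, keepdim=True).clamp(min=1e-6)       # renormalize rows
        return torch.relu(self.proj(adj @ x)) + x

# Toy usage: 64 nodes with 32-dim features and a soft region mask.
out = MaskedGraphConv(32)(torch.randn(64, 32), torch.rand(64))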
In order to use the proposed RMPC for image forgery
localization, it is necessary to obtain the edge information
between the forged and authentic regions, which is the other
key of this method. Therefore, we develop an edge reconstruction (ER)
module, which includes a context-enhanced graph
(CEG) and a threshold-adaptive differentiable binarization
module. We specially design an adjacency matrix learner in
the CEG, which encodes global information along the nodes
to assemble the global semantic information. Inspired by
the Sigmoid function, we develop the threshold-adaptive
differentiable binarization edge algorithm, which stands on
the output of the CEG to provide the learnable edges.
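A minimal sketch of such a differentiable binarization step follows, assuming a steep sigmoid with a learnable scalar threshold (the steepness k and the threshold parameterization are assumptions for illustration).

import torch
import torch.nn as nn

class DiffBinarize(nn.Module):
    def __init__(self, k=50.0):
        super().__init__()
        self.k = k                                         # steepness of the soft step
        self.threshold = nn.Parameter(torch.tensor(0.5))   # learnable threshold

    def forward(self, prob_map):
        # prob_map: (B, 1, H, W) soft edge probabilities in [0, 1]; output is
        # approximately binary but remains differentiable for training.
        return torch.sigmoid(self.k * (prob_map - self.threshold))

edges = DiffBinarize()(torch.rand(1, 1, 64, 64))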
In summary, in this work we propose a new two-step
framework named Edge-aware Regional Message Passing
Controller (ERMPC) for Image Forgery Localization, in-
cluding RMPC and ER. The ERMPC could effectively con-
trol the message passing between the forged and authentic
regions to achieve effective disentanglement of the two re-
gions, thus boosting the performance of image forgery lo-
calization. We take edge information as the main task and
use it as a basis to explicitly model the inconsistency be-
tween two regions. To the best of our knowledge, this work
is the first attempt to explicitly weaken the message passing
between the forged and authentic regions. Our contributions
are as follows:
• We propose ERMPC, a novel two-step coarse-to-fine
framework for image forgery localization, explicitly
modeling the inconsistency between the forged and au-
thentic regions with edge information.
• We propose an edge-aware dynamic graph, also known
as RMPC, to control the message passing between two
regions (forged and authentic) in the feature map.
• We develop an edge reconstruction module containing
a context-enhanced graph and a threshold-adaptive dif-
ferentiable binarization module to obtain the desired
edge information.
• We conduct extensive experiments on multiple bench-
marks and demonstrate that our method is qualitatively
and quantitatively superior to state-of-the-art image
forgery localization methods.
|
Le_Music-Driven_Group_Choreography_CVPR_2023
|
Abstract
Music-driven choreography is a challenging problem
with a wide variety of industrial applications. Recently,
many methods have been proposed to synthesize dance mo-
tions from music for a single dancer. However, generat-
ing dance motion for a group remains an open problem.
In this paper, we present AIOZ-GDANCE, a new large-
scale dataset for music-driven group dance generation. Un-
like existing datasets that only support single dance, our
new dataset contains group dance videos, hence support-
ing the study of group choreography. We propose a semi-
autonomous labeling method with humans in the loop to
obtain the 3D ground truth for our dataset. The proposed
dataset consists of 16.7 hours of paired music and 3D mo-
tion from in-the-wild videos, covering 7 dance styles and 16
music genres. We show that naively applying single dance
generation technique to creating group dance motion may
lead to unsatisfactory results, such as inconsistent move-
ments and collisions between dancers. Based on our new
dataset, we propose a new method that takes an input music
sequence and a set of 3D positions of dancers to efficiently
produce multiple group-coherent choreographies. We pro-
pose new evaluation metrics for measuring group dance
quality and perform intensive experiments to demonstrate
the effectiveness of our method. Our project facilitates fu-
ture research on group dance generation and is available at
https://aioz-ai.github.io/AIOZ-GDANCE/ .
|
1. Introduction
Dancing is an important part of human culture and re-
mains one of the most expressive physical art and com-
munication forms [ 17,25]. With the rapid development of
digital social media platforms, creating dancing videos has
gained significant attention from social communities. As
the consequence, millions of dancing videos are created and
watched daily on online platforms. Recently, studies of how
to create natural dancing motion from music have attracted
great attention in the research community [ 8]. The outcome
of dancing generation techniques can be applied to various
applications such as animation [ 39], virtual idol [ 47], meta-
verse [ 35], or in dance education [ 4,51].
Although there is some progress towards synthesizing
realistic dancing motion from music in recent literature [ 26,
39, 47, 53], creating natural 3D dancing motions from the
input audio remains an open problem [ 39]. This is mainly
due to (i)the complex structure and the non-linear correla-
tion between continuous human motion and the accompa-
nying music audio, (ii)the variety in the repertoire of danc-
ing motions for an expressive choreography performance,
(iii)the difficulty in generating long motion sequences, and
(iv)the complication of capturing the correspondences be-
tween the dancing motion and audio such as dancing styles
or music rhythms. Furthermore, recent works focus on gen-
erating dancing motion for a solo dancer [16, 20, 39, 61], while
producing dancing motion for a group of dancers has not
been well-investigated by the community yet.
Compared to the single dance generation task, the group
dance generation poses a more challenging problem [ 13].
In practice, group dance may contain complicated and dif-
ferent choreographies between attended dancers while also
retaining the relationship between the motion and the in-
put music rhythmically. Furthermore, group dance has the
communication between dancers through physical contact,
hence performing a correlation between motion series in a
group is essential and challenging. Currently, most of the
music-to-dance datasets [ 37,39,47,55,58,66] only con-
tain solo dance videos. Hence, they are not able to cap-
ture essential aspects that only occurred in group dance sce-
narios, including multiple-person motions, synchronization,
and interaction between dancers.
Since learning to synthesize dancing motion from mu-
sic is a highly challenging task, one of the primary require-
ments is the availability of large-scale datasets. Some works
employ professional choreographers to obtain music-dance
datasets under a highly complex Motion Capture (MoCap)
system [ 10,37,58,63]. Although the captured motions are
accurate, it is challenging to scale up and increase the di-
versity of the data with several dance styles and music gen-
res. To overcome the limitation of the MoCap system, an-
other line of works leverages existing pose estimation algo-
rithms to generate pseudo-ground truths for in-the-wild
dancing videos [ 20,34,55]. However, these aforementioned
datasets are designed originally for the single motion gener-
ation task and provide only paired music and single dancing
motion [ 37,39,47], thus they cannot be applied to facilitate
generating multiple motions within a group of dancers.
Motivated by these shortcomings, this paper introduces
AIOZ-GDANCE , a new large-scale dataset to advance the
study of group choreography. Unlike existing choreogra-
phy datasets that only support a single dancer, our dataset
consists of group dance videos. As in Figure 1, our dataset
has multiple input modalities (i.e., video frames, audio) and
multiple 3D human mesh ground truths. To annotate the
dataset, we introduce a semi-automatic method with hu-
mans in the loop to ensure the data quality. Using the new
dataset, we propose the first strong baseline for group dance
generation that can jointly generate multiple dancing mo-
tions expressively and coherently.
Our contributions are summarised as follows:
• We introduce AIOZ-GDANCE , a new large-scale
dataset for group dance generation. To our best knowl-
edge, AIOZ-GDANCE is the largest audio-driven
group dance dataset.
• Based on our new dataset, we propose a new method,
namely GDanceR, to efficiently generate group danc-
ing motion from the input audio.
|
Li_MEGANE_Morphable_Eyeglass_and_Avatar_Network_CVPR_2023
|
Abstract
Eyeglasses play an important role in the perception of
identity. Authentic virtual representations of faces can ben-
efit greatly from their inclusion. However, modeling the
geometric and appearance interactions of glasses and the
face of virtual representations of humans is challenging.
Glasses and faces affect each other’s geometry at their con-
tact points, and also induce appearance changes due to
light transport. Most existing approaches do not capture
these physical interactions since they model eyeglasses and
faces independently. Others attempt to resolve interactions
as a 2D image synthesis problem and suffer from view and
temporal inconsistencies. In this work, we propose a 3D
compositional morphable model of eyeglasses that accu-
rately incorporates high-fidelity geometric and photometric
interaction effects. To support the large variation in eye-
glass topology efficiently, we employ a hybrid representa-
tion that combines surface geometry and a volumetric rep-
resentation. Unlike volumetric approaches, our model natu-
rally retains correspondences across glasses, and hence ex-
plicit modification of geometry, such as lens insertion and
frame deformation, is greatly simplified. In addition, our
model is relightable under point lights and natural illumi-
nation, supporting high-fidelity rendering of various frame
materials, including translucent plastic and metal within a
single morphable model. Importantly, our approach mod-
els global light transport effects, such as casting shadows
between faces and glasses. Our morphable model for eye-
glasses can also be fit to novel glasses via inverse rendering.
We compare our approach to state-of-the-art methods and
demonstrate significant quality improvements.
* Work done while Junxuan Li was an intern at Reality Labs Research.
|
1. Introduction
Humans are social animals. How we dress and acces-
sorize is a key mode of self-expression and communication
in daily life [11]. As social media and gaming have expanded
social life into the online medium, virtual presentations of
users have become increasingly focal to social presence,
and with it, the demand for the digitization of clothes and
accessories. In this paper, we focus on modeling eyeglasses,
an everyday accessory for billions of people worldwide.
In particular, we argue that to achieve realism it is not
sufficient to model eyeglasses in isolation: their interactions
with the face have to be considered. Geometrically, glasses
and faces are not rigid, and they mutually deform one an-
other at the contact points. Thus, the shapes of eyeglasses
and faces cannot be determined independently. Similarly,
their appearance is coupled via global light transport, and
shadows as well as inter-reflections may appear and affect
the radiance. A computational approach to model these in-
teractions is therefore necessary to achieve photorealism.
Photorealistic rendering of humans has been a focus of
computer graphics for over 50 years, and yet the realism
of avatars created by classical authoring tools still requires
extensive manual refinement to cross the uncanny valley.
Modern realtime graphics engines [ 10] support the compo-
sition of individual components (e.g., hair, clothing), but
the interaction between the face and other objects is by
necessity approximated with overly simplified physically-
inspired constraints or heuristics (e.g., “no interpenetra-
tions”). Thus, they do not faithfully reconstruct all geomet-
ric and photometric interactions present in the real world.
Another group of approaches aims to synthesize the
composition of glasses in the image domain [ 28,66,69]
by leveraging powerful 2D generative models [ 25]. While
these approaches can produce photorealistic images, anima-
tion results typically suffer from view and temporal incon-
sistencies due to the lack of 3D information.
Recently, neural rendering approaches [ 56] achieve pho-
torealistic rendering of human heads [ 14,17,35,36,48] and
general objects [ 40,44,60,70] in a 3D consistent manner.
These approaches are further extended to generative mod-
eling for faces [ 6] and glasses [ 39,64], such that a sin-
gle morphable model can span the shape and appearance
variation of each object category. However, in these ap-
proaches [ 6,39,64] interactions between objects are not
considered, leading to implausible object compositions.
While a recent work shows that unsupervised learning of a
3D compositional generative model from an image collec-
tion is possible [ 43], we observe that the lack of structural
prior about faces or glasses leads to suboptimal fidelity. In
addition, the aforementioned approaches are not relightable,
thus not allowing us to render glasses on faces in a novel il-
lumination.
In contrast to existing approaches, we aim at model-
ing the geometric and photometric interactions between
eyeglasses frames and faces in a data-driven manner from
image observations. To this end, we present MEGANE
(Morphable Eyeglass and Avatar Network), a morphable
and relightable eyeglass model that represents the shape
and appearance of eyeglasses frames and its interaction with
faces. To support variations in topology and rendering effi-
ciency, we employ a hybrid representation combining sur-
face geometry and a volumetric representation [ 37]. As
our hybrid representation offers explicit correspondences
across glasses, we can trivially deform its structure based
on head shapes. Most importantly, our model is conditioned
by a high-fidelity generative human head model [ 6], allow-
ing it to specialize deformation and appearance changes to
the wearer. Similarly, we propose glasses-conditioned de-
formation and appearance networks for the morphable face
model to incorporate the interaction effects caused by wear-
ing glasses. We also propose an analytical lens model that
produces photorealistic reflections and refractions for any prescription and simplifies the capture task, enabling lens
insertion in a post-hoc manner.
To jointly render glasses and faces in novel illumina-
tions, we incorporate physics-inspired neural relighting into
our proposed generative modeling. The method infers out-
put radiance given view, point-light positions, visibility, and
specular reflection with multiple lobe sizes. The proposed
approach significantly improves generalization and sup-
ports subsurface scattering and reflections of various mate-
rials including translucent plastic and metal within a single
model. Parametric BRDF representations cannot handle
such diverse materials, which exhibit significant transmis-
sive effects, and inferring their parameters for photorealistic
relighting remains challenging [ 41,74,77].
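A highly simplified sketch of a relighting head of this kind is given below: an MLP maps a local feature together with view direction, light direction, a visibility value, and precomputed specular lobe responses to RGB radiance. The input encoding and dimensions are assumptions; the actual model conditions a volumetric avatar representation rather than operating on isolated points.

import torch
import torch.nn as nn

class RelightHead(nn.Module):
    def __init__(self, feat_dim=64, num_lobes=3, hidden=128):
        super().__init__()
        in_dim = feat_dim + 3 + 3 + 1 + num_lobes   # feature, view, light, visibility, lobes
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),                   # RGB radiance
        )

    def forward(self, feat, view_dir, light_dir, visibility, spec_lobes):
        x = torch.cat([feat, view_dir, light_dir, visibility, spec_lobes], dim=-1)
        return torch.relu(self.mlp(x))              # non-negative radiance

# Toy usage for 1024 sample points.
n = 1024
rgb = RelightHead()(torch.randn(n, 64), torch.randn(n, 3), torch.randn(n, 3),
                    torch.rand(n, 1), torch.rand(n, 3))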
To evaluate our approach, we captured 25 subjects us-
ing a multi-view light-stage capture system similar to Bi et
al. [3]. Each subject was captured three times; once with-
out glasses, and another two times wearing a random se-
lection out of a set of 43 glasses. All glasses were cap-
tured without lenses. As a preprocess, we separately recon-
struct glasses geometry using a differentiable neural SDF
from multi-view images [ 60]. Our study shows that care-
fully designed regularization terms based on this precom-
puted glasses geometry significantly improves the fidelity
of the proposed model. We also compare our approach with
state-of-the-art generative eyeglasses models, demonstrat-
ing the efficacy of our representation as well as the pro-
posed joint modeling of interactions. We further show that
our morphable model can be fit to novel glasses via inverse
rendering and relight them in new illumination conditions.
In summary, the contributions of this work are:
• the first work that tackles the joint modeling of ge-
ometric and photometric interactions of glasses and
faces from dynamic multi-view image collections.
• a compositional generative model of eyeglasses that
represents topology varying shape and complex ap-
pearance of eyeglasses using a hybrid mesh-volumetric
representation.
• a physics-inspired neural relighting approach that sup-
ports global light transport effects of diverse materials
in a single model.
|
Liao_EMT-NASTransferring_Architectural_Knowledge_Between_Tasks_From_Different_Datasets_CVPR_2023
|
Abstract
The success of multi-task learning (MTL) can largely be
attributed to the shared representation of related tasks, al-
lowing the models to better generalise. In deep learning,
this is usually achieved by sharing a common neural net-
work architecture and jointly training the weights. How-
ever, the joint training of weighting parameters on mul-
tiple related tasks may lead to performance degradation,
known as negative transfer. To address this issue, this work
proposes an evolutionary multi-tasking neural architecture
search (EMT-NAS) algorithm to accelerate the search pro-
cess by transferring architectural knowledge across mul-
tiple related tasks. In EMT-NAS, unlike the traditional
MTL, the model for each task has a personalised network
architecture and its own weights, thus offering the ca-
pability of effectively alleviating negative transfer. A fit-
ness re-evaluation method is suggested to alleviate fluctu-
ations in performance evaluations resulting from param-
eter sharing and the mini-batch gradient descent training
method, thereby avoiding losing promising solutions during
the search process. To rigorously verify the performance of
EMT-NAS, the classification tasks used in the empirical as-
sessments are derived from different datasets, including the
CIFAR-10 and CIFAR-100, and four MedMNIST datasets.
Extensive comparative experiments on different numbers of
tasks demonstrate that EMT-NAS takes 8% (on CIFAR) and up to 40% (on MedMNIST) less search time to find competitive neural architectures than its single-task counterparts.
|
1. Introduction
Many neural architecture search (NAS) algorithms [8,
16, 23, 56] have shown better performance on a specific
task than manually designed deep neural networks [11, 14,
35, 51].
*Corresponding author.
Code: https://github.com/PengLiao12/EMT-NAS
Figure 1. Conceptual differences between our work and two main existing methodologies for two-task learning. (a) Existing multi-task NAS typically handles multiple tasks such as semantic segmentation and surface normal prediction on the same dataset, where the loss function contains losses for multiple tasks, resulting in a network architecture that shares a common set of weight parameters. (b) Model-based transfer learning trains a network architecture on a source dataset and then transfers it to a target dataset by fine-tuning the weights using a separate loss function. (c) EMT-NAS simultaneously optimises multiple classification tasks from different datasets using a separate loss function for each task, resulting in an individual network architecture and corresponding weights for each task. EMT-NAS aims to optimise each task separately while sharing the knowledge of good network architectures to facilitate the optimisation of each task. Best viewed in colour.
However, when the task (or dataset) changes, the
NAS algorithm needs to be run again to find a new optimal
network architecture for the new task, which is typical for
single-task learning (STL).
By contrast, it has been shown that simultaneously learn-
ing multiple related tasks, known as multi-task learning
(MTL) is beneficial [3, 52]. One class of MTL focuses
on designing or training a single network to jointly solve
multiple tasks formed by scene understanding (the same
dataset) [38, 46], such as the NYU v2 dataset [37], the
CityScapes dataset [4], and the Taskonomy dataset [48].
These tasks consist of semantic segmentation, surface nor-
mal prediction, depth prediction, keypoint detection and
edge detection, which are known as multiple tasks that arise
naturally [3]. Another typical scenario of MTL is inspired
from human learning, where humans often transfer knowl-
edge from one task to another related task, for example, the
skills of playing squash and tennis could help each other
to improve [52]. In this case, tasks may come from dif-
ferent datasets. More often than not, a pair of tasks from
different datasets have a lower relatedness score than those
from the same dataset [18]. This can be clearly shown if we
analyse the task relatedness [1] between classification tasks
from four medical datasets using the representation simi-
larity analysis (RSA) [7]. The results of our analysis are
presented in Fig. 2, from which we can see that the relat-
edness score becomes especially low when the datasets are
from different data modalities, implying that MTL on tasks
from different datasets is more challenging.
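One common RSA recipe for such relatedness scores is sketched below: build a representational dissimilarity matrix (RDM) per task from a ResNet-50 feature extractor on a shared probe set and correlate the upper triangles. The shared-probe-set protocol and the Spearman correlation are assumptions made only to make the measurement concrete; the paper's exact analysis may differ.

import torch
import torchvision.models as models
from scipy.stats import spearmanr

def rdm(features):
    # features: (N, D) -> (N, N) dissimilarity matrix (1 - cosine similarity).
    f = torch.nn.functional.normalize(features, dim=1)
    return 1.0 - f @ f.t()

def task_relatedness(model_a, model_b, probe_images):
    # Compare two task-specific feature extractors on a shared probe set.
    model_a.eval(); model_b.eval()
    with torch.no_grad():
        fa, fb = model_a(probe_images), model_b(probe_images)
    ra, rb = rdm(fa), rdm(fb)
    iu = torch.triu_indices(ra.shape[0], ra.shape[0], offset=1)
    rho, _ = spearmanr(ra[iu[0], iu[1]].numpy(), rb[iu[0], iu[1]].numpy())
    return rho

def feature_extractor():
    m = models.resnet50(weights=None)   # in practice: weights fine-tuned on a task
    m.fc = torch.nn.Identity()          # use pooled features instead of logits
    return m

# Toy usage with random tensors standing in for a shared probe set.
score = task_relatedness(feature_extractor(), feature_extractor(),
                         torch.randn(16, 3, 224, 224))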
MTL is able to improve the performance of learning mul-
tiple related tasks by means of sharing common knowledge
between tasks, including the network architecture and the
weights of the model. Existing MTL designs one architec-
ture for multiple tasks, although only a subset of that net-
work architecture is practically used for each task. In other
words, there are shares and differences between these sub-
sets of model architectures [33]. In sharing the weights, if
one or more tasks have a dominating influence in training
the network weights, the performance of MTL may deteri-
orate on some of the dominated tasks, which is called neg-
ative transfer [40]. Therefore, this work aims to improve
the performance of each task by finding a personalised net-
work architecture for each task, thereby alleviating negative
transfer resulting from the jointly trained weights.
Inspired by the successful research on multi-factorial
evolutionary algorithms (MFEAs) [9], a recently proposed
approach to knowledge transfer in evolutionary optimisa-
tion, this work designs an algorithm to separately search
for a personalised network architecture and corresponding
weight parameters for each task in the same search space.
This way, knowledge transfer across different tasks can be
achieved through crossover between these architectures to
accelerate the search process on each task, provided that
there are similarities between the multiple learning tasks.
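A toy sketch of this cross-task transfer-by-crossover is shown below, assuming each architecture is encoded as a list of per-block bit strings; block-based crossover swaps whole blocks between parents found on different tasks, and bit-based mutation flips individual bits. The encoding length and the rates are illustrative, not the paper's exact operators.

import random

def block_crossover(parent_a, parent_b, p_swap=0.5):
    # parents: lists of blocks, each block a list of bits (0/1).
    child_a, child_b = [], []
    for blk_a, blk_b in zip(parent_a, parent_b):
        if random.random() < p_swap:
            blk_a, blk_b = blk_b, blk_a        # exchange the whole block
        child_a.append(list(blk_a))
        child_b.append(list(blk_b))
    return child_a, child_b

def bit_mutation(arch, p_flip=0.05):
    return [[bit ^ 1 if random.random() < p_flip else bit for bit in blk]
            for blk in arch]

# Two parents, e.g. the best architecture found so far on each task.
task1_arch = [[random.randint(0, 1) for _ in range(8)] for _ in range(5)]
task2_arch = [[random.randint(0, 1) for _ in range(8)] for _ in range(5)]
child1, child2 = block_crossover(task1_arch, task2_arch)
child1 = bit_mutation(child1)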
Contributions: First, we propose an evolutionary multi-
tasking NAS algorithm (EMT-NAS) for finding person-
alised network architectures for different tasks (datasets) in
the same search space. Architectural knowledge transfer
is achieved by optimising the supernet parameters of each
task separately and recommending neural network archi-
tectures with good performance on their own task. Sec-
ond, block-based crossover and bit-based mutation opera-
tors are designed to accommodate the transformation from a continuous space to a discrete space. Finally, a fitness re-evaluation method is introduced to alleviate fitness fluctuations over the generations resulting from parameter sharing and the mini-batch gradient descent based training, in order to keep promising solutions.
Figure 2. Task similarity matrix. The relatedness scores between four medical tasks are computed using ResNet-50 as a feature extractor. PathMNIST [44] is tissue classification based on colorectal cancer histology slides, while OrganMNIST{Axial,Coronal,Sagittal} [44] are organ classifications based on 3 types of 2D images formed by 3D computed tomography (CT) images. We observe that the relatedness scores between PathMNIST and each of OrganMNIST{Axial,Coronal,Sagittal} are all smaller than those among OrganMNIST{Axial,Coronal,Sagittal}.
We present thorough
experimental evaluations on different datasets, including
medical classification benchmarks (PathMNIST, OrganM-
NIST{Axial,Coronal,Sagittal }) and popular image classi-
fication benchmarks (CIFAR-10, CIFAR-100, ImageNet),
demonstrating the efficacy and the value of each compo-
nent of EMT-NAS. Our results demonstrate that, unlike tra-
ditional MTL, it is feasible to search for personalised net-
work architectures for multiple tasks through architectural
knowledge transfer, obtaining better performance and re-
quiring less search time compared with single-task learning.
|
Khan_Q_How_To_Specialize_Large_Vision-Language_Models_to_Data-Scarce_VQA_CVPR_2023
|
Abstract
Finetuning a large vision language model (VLM) on a
target dataset after large scale pretraining is a dominant
paradigm in visual question answering (VQA). Datasets for
specialized tasks such as knowledge-based VQA or VQA
in non natural-image domains are orders of magnitude
smaller than those for general-purpose VQA. While col-
lecting additional labels for specialized tasks or domains
can be challenging, unlabeled images are often available.
We introduce SelTDA (Self-Taught Data Augmentation),
a strategy for finetuning large VLMs on small-scale VQA
datasets. SelTDA uses the VLM and target dataset to build a
teacher model that can generate question-answer pseudola-
bels directly conditioned on an image alone, allowing us to
pseudolabel unlabeled images. SelTDA then finetunes the
initial VLM on the original dataset augmented with freshly
pseudolabeled images. We describe a series of experiments
showing that our self-taught data augmentation increases ro-
bustness to adversarially searched questions, counterfactual
examples and rephrasings, improves domain generalization,
and results in greater retention of numerical reasoning skills.
The proposed strategy requires no additional annotations or
architectural modifications, and is compatible with any mod-
ern encoder-decoder multimodal transformer. Code avail-
able at https://github.com/codezakh/SelTDA .
|
1. Introduction
Large, pretrained vision language foundation models [3,
20, 25, 26, 35, 49] are approaching human-level performance
on visual question answering (VQA) [26, 50 –52, 54, 62], as
measured by the standard VQAv2 [13] benchmark. Yet on
more complex VQA tasks [37, 43] there is a larger gap be-
tween humans and machines. One difficulty is the small
scale of datasets for complex VQA tasks or those in domains
beyond natural images. The first solution to deal with the
data scarcity is to employ transfer learning from a larger
VQA dataset (e.g. VQAv2) to the smaller, specialized VQA dataset.
∗Work done while at NEC Labs America.
Figure 1. SelTDA expands the self-training paradigm to VQA. By self-generating supervision (orange line) for an image I without needing extra annotations, we can augment a target dataset with new images and their pseudo-questions and answers (Q, A).
However, weaknesses of VQA models such as lack
of consistency [44], weakness to adversarially searched ques-
tions [27] and tendency to cheat by learning shortcuts [8]
can be exacerbated when fine-tuning on small datasets.
Collecting annotations to expand a dataset for knowledge-
intensive tasks or specialized domains is often prohibitively
expensive. However, unlabeled images are cheap and often
available. How can we exploit unlabeled images for specific
visual question answering tasks? One possibility is to gen-
erate new question+answer pairs for the unlabeled images,
and use them during training. However, existing methods for
visual question generation require images with annotations —
either ground truth captions [2,4], or bounding boxes [21,48].
Even if these annotations were to be acquired, they induce a
limited set of possible questions; they are limited to objects
and concepts included in the acquired annotation, which are
in turn limited by the finite label space of pretrained object
detectors and the information disparity between a caption
and an image (an image usually contains much more content
than a short caption can describe).
Figure 2. Motivating experiment. We sample increasingly diverse captions from BLIP [26], convert them to questions, and pose the questions to BLIP after finetuning on VQAv2. As caption diversity increases, self-agreement decreases (right panel). Despite the diversity, many captions remain correct (middle panel), suggesting that the VLM has knowledge that is not exhausted by task-specific finetuning.
Motivating Experiment: In Fig. 2, we show that a large
vision-language model (VLM) pretrained on web-scale data
contains knowledge that can be drawn out with image-
conditional text generation, but which the model cannot
verify when posed as a visual question-answering task. We
prompt the BLIP [26] VLM (pretrained on 129M image-text
pairs) to caption 1000 images from the CC3M [45] dataset
starting with the phrase “ this is a ”. We convert each
caption into a boolean question where the correct answer is
“yes” by inserting the caption into the template is this
a <caption>? Next, we ask a BLIP VLM finetuned on
the VQAv2 dataset [13] to choose between “yes” and “no”
for each caption turned into a question. Surprisingly, the
VQA-finetuned BLIP answers “no” to at least 5% of the
questions, increasing to 15% as the diversity of captions
increases (adjusted by the top-p parameter in nucleus sampling).
This suggests the possibility that the VLM has knowledge it
cannot exploit when answering questions, but is accessible
when directly generating text conditioned on an image.
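A small sketch of this caption-to-question probe is given below; caption_model and vqa_model are hypothetical callables standing in for BLIP's captioning and VQA heads, and the prompt and template strings mirror the ones quoted above.

def self_agreement(images, caption_model, vqa_model, top_p=0.9):
    # Caption each image, wrap the caption in a boolean question whose correct
    # answer is "yes", ask the VQA model, and count how often it agrees.
    agree = 0
    for image in images:
        caption = caption_model(image, prompt="this is a ", top_p=top_p)
        question = f"is this a {caption}?"
        answer = vqa_model(image, question)        # expected: "yes" or "no"
        agree += int(answer.strip().lower() == "yes")
    return agree / max(len(images), 1)

# Dry run with dummy models.
rate = self_agreement(["img"], lambda im, prompt, top_p: "cat",
                      lambda im, q: "yes")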
Approach : To exploit unlabeled images for VQA, we pro-
pose SelTDA , a three-stage framework for Self-Taught Data
Augmentation (Fig 1 bottom panel). We adapt the paradigm
of self-training used in object detection [29, 65] and image
classification [41, 60] for VQA. In classification / detection,
the task of labeling an image is identical to prediction, and
the teacher and student optimize identically structured ob-
jectives. In VQA self-training, the student and teacher tasks
are different. A teacher must pose and answer a question
given an image, while the student provides an answer given
a question and an image. To handle this, we first cast the task
of the teacher as a direct image-to-text generation task, and
introduce a teacher model by updating the weights of the
VLM to learn an image-conditional visual question gener-
ation model VQG_IC. Next, we use VQG_IC as a teacher to pseudolabel unlabeled images by sampling questions and
answers from VQG_IC with stochastic decoding. Finally, we
augment the original VQA dataset with the newly labeled
image-question-answer pairs, and finetune the VLM for vi-
sual question answering on the augmented VQA dataset.
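The three stages can be summarized in the orchestration sketch below. Every helper passed in (finetune_question_generation, sample_qa, finetune_vqa) is a hypothetical placeholder for training and decoding routines on an encoder-decoder VLM, not part of any released API.

def seltda(vlm, target_vqa_dataset, unlabeled_images,
           finetune_question_generation, sample_qa, finetune_vqa,
           samples_per_image=2):
    # Stage 1: turn the VLM into a teacher that generates question-answer text
    # directly from an image, trained on the small target dataset.
    teacher = finetune_question_generation(vlm, target_vqa_dataset)

    # Stage 2: pseudolabel unlabeled images with stochastic (nucleus) decoding.
    pseudo = []
    for image in unlabeled_images:
        for question, answer in sample_qa(teacher, image, n=samples_per_image, top_p=0.9):
            pseudo.append((image, question, answer))

    # Stage 3: finetune the original VLM for VQA on the augmented dataset.
    augmented = list(target_vqa_dataset) + pseudo
    return finetune_vqa(vlm, augmented)

# Dry run with dummy callables.
student = seltda(
    vlm="vlm-checkpoint",
    target_vqa_dataset=[("img0", "what is shown?", "a dog")],
    unlabeled_images=["img1", "img2"],
    finetune_question_generation=lambda vlm, data: "teacher",
    sample_qa=lambda teacher, image, n, top_p: [("is this outdoors?", "yes")] * n,
    finetune_vqa=lambda vlm, data: ("student", len(data)),
)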
Benefits: SelTDA allows us to generate synthetic training data
by approximating the distribution P(Q, A | I) of the target
VQA task, where Q, A, I represent a question, answer, and
image respectively. One benefit is that the synthetic data in-
creases the number of training pairs available for finetuning,
which effects an increase in raw performance. A second ben-
efit is an increase in the diversity of questions and answers
due to the introduction of new images and the stochastic
nature of the text decoding, which results in increased ro-
bustness and domain generalization. A third benefit is the
distillation of knowledge from pretraining and transfer learn-
ing into the synthetic training data, which can teach new
skills (e.g. domain generalization) or prevent the forget-
ting of specific skills (e.g. numerical reasoning). Finally
SelTDA is architecture-agnostic given a vision-language
model capable of image-conditional text-generation. Our
contributions can be summarized as follows:
1.We introduce SelTDA , a variant of the self-training
paradigm that is designed for VQA and large gener-
ative pretrained VLMs.
2.We propose treating visual question generation as a di-
rect image-to-text task by leveraging the autoregressive
decoder of a large, pretrained VLM, enabling us to gen-
erate questions and answers from an unlabeled image
with no auxiliary annotations needed.
3.We show that a large VLM trained with the proposed
SelTDA gains increased robustness, domain general-
ization, numerical reasoning, and performance when
finetuning on small-scale VQA datasets.
|
Kumar_MethaneMapper_Spectral_Absorption_Aware_Hyperspectral_Transformer_for_Methane_Detection_CVPR_2023
|
Abstract
Methane (CH4) is the chief contributor to global cli-
mate change. Recent Airborne Visible-Infrared Imaging
Spectrometer-Next Generation (AVIRIS-NG) has been very
useful in quantitative mapping of methane emissions. Ex-
isting methods for analyzing this data are sensitive to local
terrain conditions, often require manual inspection from do-
main experts, prone to significant error and hence are not
scalable. To address these challenges, we propose a novel
end-to-end spectral absorption wavelength aware trans-
former network, MethaneMapper, to detect and quantify the
emissions. MethaneMapper introduces two novel modules
that help to locate the most relevant methane plume regions
in the spectral domain and uses them to localize these ac-
curately. Thorough evaluation shows that MethaneMapper
achieves 0.63 mAP in detection and reduces the model size
(by 5×) compared to the current state of the art. In ad-
dition, we also introduce a large-scale dataset of methane
plume segmentation mask for over 1200 AVIRIS-NG flight
lines from 2015-2022. It contains over 4000 methane plume
sites. Our dataset will provide researchers the opportunity
to develop and advance new methods for tackling this chal-
lenging green-house gas detection problem with significant
broader social impact. Dataset and source code link1.
|
1. Introduction
We consider the problem of detecting and localizing
methane (CH4) plumes from hyperspectral imaging data. Detecting and localizing potential CH4 hot spots is a necessary first step in combating global warming due to greenhouse gas emissions. Methane gas is estimated to contribute 20% of global warming induced by greenhouse gases [24], with a Global Warming Potential (GWP) 86 times higher than carbon dioxide (CO2) over a 20-year period [33]. To put this into perspective: the amount of environmental damage that CO2 can do in 100 years, CH4 can do in 1.2 years. Hence it is critical to monitor and curb CH4 emissions. While CH4 emission has many sources, of particular interest are those from the oil and natural gas industries. According to the United States Environmental Protection Agency report, CH4 emissions from these industries account for 84 million tons per year [18]. These CH4 emissions emanate from specific locations, mainly from pipeline leaks, storage tank leaks, or leaks at oil extraction points.
1 https://github.com/UCSB-VRL/MethaneMapper-Spectral-Absorption-aware-Hyperspectral-Transformer-for-Methane-Detection
Current efforts to detect these sources mostly depend on
aerial imagery. The Jet Propulsion Laboratory (JPL) has
conducted thousands of aerial surveys in the last decade to
collect data using the airborne AVIRIS-NG sensor [21]. Sev-
eral methods have been proposed to detect potential emis-
sion sites from such imagery, for example, see [8, 9, 35, 38,
40, 41]. However, these methods are in general very sensi-
tive to background context and land-cover types, resulting
in a large number of false positives that often require sig-
nificant domain expert time to correct the detections. The
primary reason is that these pixel-based methods are solely
dependent on spectral correlations for detection. Spatial in-
formation can be very effective in reducing these false pos-
itives, as CH4 plumes exhibit a characteristic plume-like morphology. There have been recent efforts to exploit spatial correlation using deep learning methods [22, 29]; however,
these works do not leverage spectral properties to filter out
confusers. For example, methane has similar spectral prop-
erties as white-painted commercial roofs or paved surfaces
such as airport asphalts [1]. This paper presents a novel
deep-network based solution to minimize the effects of such
confusers in accurately localizing methane plumes.
Our proposed approach, referred to as the MethaneMapper (MM), adapts DETR [4], a transformer model that combines the spectral and spatial correlations in the imaging data to generate a map of potential methane (CH4) plume candidates. These candidates reduce the search space for a hyperspectral decoder to detect CH4 plumes and
remove potential confusers. MM is a lightweight, end-to-end, single-stage CH4 detector and introduces two novel modules: a Spectral Feature Generator and a Query Refiner. The former generates spectral features from a linear filter that maximizes the CH4-to-noise ratio in the presence of additive background noise, while the latter integrates these features for decoding.
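For readers unfamiliar with such filters, the linear filter described here is in the spirit of the classical matched filter used for gas-plume enhancement in hyperspectral imagery. The following is a minimal sketch of that classical formulation, not the paper's exact Spectral Linear Filter (which additionally picks spectrally correlated pixels before whitening); the array shapes, the regularization constant, and the function name are our assumptions.
import numpy as np

def matched_filter_scores(cube, target_sig, eps=1e-5):
    """Classical matched filter over a hyperspectral cube.

    cube:       (H, W, C) radiance values
    target_sig: (C,) CH4 absorption signature
    returns:    (H, W) per-pixel enhancement scores
    """
    H, W, C = cube.shape
    X = cube.reshape(-1, C).astype(np.float64)

    mu = X.mean(axis=0)                          # background mean spectrum
    Xc = X - mu                                  # centered pixel spectra
    cov = (Xc.T @ Xc) / (X.shape[0] - 1)         # background covariance
    cov += eps * np.eye(C)                       # regularize for a stable inverse

    cov_inv_t = np.linalg.solve(cov, target_sig)     # Sigma^{-1} t
    denom = target_sig @ cov_inv_t                   # t^T Sigma^{-1} t
    scores = (Xc @ cov_inv_t) / denom                # (x - mu)^T Sigma^{-1} t / denom
    return scores.reshape(H, W)

# Toy usage with random data and a random 100-band signature
scores = matched_filter_scores(np.random.rand(64, 64, 100), np.random.rand(100))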
A major bottleneck for the development of CH4 detection methods is the limited availability of public training data. To address this, another significant contribution of this research is the introduction of a new Methane Hot Spots (MHS) dataset, the largest of its kind available for computer vision researchers. MHS is curated by systematically collecting information from different publicly available datasets (airborne sensor [6], non-profits [3, 31] and satellites [34]) and generating the annotations as described in Section 4.1.1. This curated dataset contains methane segmentation masks for over 1200 AVIRIS-NG flight lines from 2015 to 2022. Each flight line contains 3-4 CH4 plume sites, for a total of 4000 in the MHS dataset.
Our contributions can be summarized as follows:
1. We introduce a novel single-stage end-to-end approach
for methane plume detection using a hyperspectral
transformer. The two modules, Spectral Feature Gen-
erator and Query Refiner, work together to improve upon the traditional transformer design and enable localization of potential methane hot spots in the hyperspectral images using a Spectral-Aware Linear Filter, and refine the query representation for better decoding.
2. A new Spectral Linear Filter (SLF) improves upon traditional linear filters by strategically picking correlated pixels in the spectral domain to better whiten the background distribution and amplify the methane signal.
3. A new benchmark dataset, MHS, provides the largest (∼35×) publicly available dataset of annotated AVIRIS-NG flight lines from 2015-2022.
|
Kim_On_the_Stability-Plasticity_Dilemma_of_Class-Incremental_Learning_CVPR_2023
|
Abstract
A primary goal of class-incremental learning is to strike
a balance between stability and plasticity, where models
should be both stable enough to retain knowledge learned
from previously seen classes, and plastic enough to learn
concepts from new classes. While previous works demon-
strate strong performance on class-incremental bench-
marks, it is not clear whether their success comes from
the models being stable, plastic, or a mixture of both.
This paper aims to shed light on how effectively recent
class-incremental learning algorithms address the stability-
plasticity trade-off. We establish analytical tools that mea-
sure the stability and plasticity of feature representations,
and employ such tools to investigate models trained with
various algorithms on large-scale class-incremental bench-
marks. Surprisingly, we find that the majority of class-
incremental learning algorithms heavily favor stability over
plasticity, to the extent that the feature extractor of a model
trained on the initial set of classes is no less effective than
that of the final incremental model. Our observations not
only inspire two simple algorithms that highlight the im-
portance of feature representation analysis, but also sug-
gest that class-incremental learning approaches, in general,
should strive for better feature representation learning.
|
1. Introduction
Despite the unprecedented success of deep learning [19,
23,27,30], most deep neural networks have static use cases.
However, real-world problems often require adaptivity to
incoming data [17], changes in training environments, and
domain shifts [2, 7, 15]. Thus, researchers have been ac-
tively working on model adaptation techniques, and have
proposed various continual learning approaches so far.
A naïve approach for continual learning is to simply fine-tune a model. However, such a solution is rather ineffective due to a phenomenon known as catastrophic forgetting [13], which arises as a result of the high plasticity of neural networks, i.e., parameters important for the old tasks are updated to better fit the new data. On the flip side, enforcing model stability introduces its own set of limitations, mainly the lack of adaptivity to new data. Thus, we encounter the stability-plasticity dilemma: how can we balance stability
and plasticity such that the model is able to learn new con-
cepts while retaining old ones? Finding an optimal balance
between these two opposing forces is a core challenge of
continual learning research, and has been the main focus of
many previous works [6, 12, 16, 20, 21, 26, 31].
We conduct an in-depth study of recent works in contin-
ual learning, specifically concentrating on class-incremental
learning (CIL)—a subfield of continual learning—where
new sets of classes arrive in an online fashion. We are moti-
vated by the lack of systematic analyses in the field of CIL,
which hampers our understanding of how effectively the ex-
isting algorithms balance stability and plasticity. Moreover,
works that do perform analyses usually focus on the classifier, e.g., classifier bias [1, 10], rather than the intermediate feature representations. However, investigating stability and plasticity at the feature level is just as important, if not
more, because utilizing the model’s full capacity to learn
robust representations is critical to maximizing the poten-
tial of CIL algorithms.
To measure plasticity, we retrain the classification layer
of CIL models at various incremental stages and study how
effectively their feature extractors have learnt new con-
cepts. We then investigate stability by measuring feature
similarity with Centered Kernel Alignment (CKA) [3, 14]
and by visualizing the feature distribution shift using t-
SNE [29]. Surprisingly, and possibly concerningly, our anal-
yses show that the majority of CIL models accumulate little
new knowledge in their feature representations across incre-
mental stages. In fact, most of the analyzed CIL algorithms
seem to alleviate catastrophic forgetting by heavily over-
looking model plasticity in favor of high stability. Finally,
we introduce two simple algorithms based on our observa-
tions. The first is an extension of Dynamically Expandable
Representations (DER) [31], which demonstrates how our
analyses may be used to improve the efficiency and efficacy
of C
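For reference, the linear CKA measure used above for the feature-level stability analysis can be computed directly from two feature matrices. The following is a minimal sketch under our own conventions (same examples in both matrices, per-dimension centering); it is not the authors' analysis code.
import numpy as np

def linear_cka(X, Y):
    """Linear Centered Kernel Alignment between two feature matrices.

    X: (n, d1) features of n examples from one model or incremental stage
    Y: (n, d2) features of the same n examples from another stage
    Returns a similarity in [0, 1]; values near 1 mean the representation
    barely changed (high stability).
    """
    X = X - X.mean(axis=0, keepdims=True)    # center each feature dimension
    Y = Y - Y.mean(axis=0, keepdims=True)

    cross = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    normx = np.linalg.norm(X.T @ X, ord="fro")
    normy = np.linalg.norm(Y.T @ Y, ord="fro")
    return cross / (normx * normy)

# Toy usage: two random 512-d representations of the same 500 inputs
sim = linear_cka(np.random.randn(500, 512), np.random.randn(500, 512))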
|
Liao_Adaptive_Channel_Sparsity_for_Federated_Learning_Under_System_Heterogeneity_CVPR_2023
|
Abstract
Owing to the non-i.i.d. nature of client data, channel neu-
rons in federated-learned models may specialize to distinct
features for different clients. Yet, existing channel-sparse
federated learning (FL) algorithms prescribe fixed sparsity
strategies for client models, and may thus prevent clients
from training channel neurons collaboratively. To minimize
the impact of sparsity on FL convergence, we propose Flado
to improve the alignment of client model update trajectories
by tailoring the sparsities of individual neurons in each client.
Empirical results show that while other sparse methods have a surprisingly large impact on convergence, Flado can not only attain the highest task accuracies with unlimited budget across a range of datasets, but also significantly reduce the amount of floating-point operations (FLOPs) required for training by more than 10× under the same communications budget, and push the Pareto frontier of the communication/computation trade-off notably further than competing FL algorithms.
|
1. Introduction
In the light of the importance of personal data and recent
privacy regulations, e.g. the General Data Protection Regula-
tion (GDPR) of the European Union [28, 31, 34], there are now great risks, responsibilities [7, 10] and technical challenges in securing private data centrally [30]; it is often
impractical to upload, store and use data on central servers.
To this end, federated learning (FL) [ 21,24] enables multiple
edge devices to learn a global shared model collaboratively
in a communication-efficient way without collecting their
local training data. Federated averaging (FedAvg) [24] and subsequent FL algorithms [16, 22] can notably reduce the burden of data transmission.
*These authors contributed equally to this work.
†Corresponding author.
These FL algorithms, however, neglected that the clients
exhibit a high degree of system heterogeneity . That is, the
clients may occupy a wide spectrum of different hardware
training capabilities [ 12]. Yet, dropping stragglers ( i.e., the
slowest clients) inherently increases statistical heterogeneity,
which causes a negative impact on convergence [ 22]. Moti-
vated by this, recent FL methods, e.g., Federated Dropout [ 5],
and FjORD [ 13], thus prescribe fixed channel sparsity strate-
gies for each client, depending on their corresponding com-
putational capabilities (Figure 1a).
Yet, owing to the non-i.i.d. nature of client data, a fixed
sparsity strategy is suboptimal, as neurons may specialize to
distinct features [ 3,37] for different clients (Figure 2). Intu-
itively, clients may waste computational effort on neurons
that lead to conflicting update trajectories “canceling out”
each other. Since neuron training is heavily dependent on
the clients’ data, it presents us an opportunity: can we adapt
neuron sparsities such that clients can focus their computa-
tional efforts to collaborate more effectively? Inspired by
the observation above, an optimal model sparsity strategy
should thus adapt to clients’ model training, and make clients
collaborate on similar model update trajectories by making
such neurons denser, while sparsifying neurons that conflict
in update directions.
Adaptive channel sparsity for FL clients is, however, a
nontrivial endeavor. First, naïvely pruning channel neurons, i.e., setting them to 0, would cause them to make no contribution to training after pruning. It is thus difficult to decide if
and when a pruned channel should be recovered. Second,
as neurons tend to extract distinct features from data, data
heterogeneous clients thus specialize to training different
neurons. Third, prescribing sparsities to channels is suboptimal: it may be desirable to allow certain clients to collaborate on particular neurons, but the neurons may not exist in the sparse models. In other words, the optimal sparsification strategy must adapt different sparsities for each neuron in each client, depending on the training trajectories.
Figure 1. Comparing existing sparse FL algorithms (e.g., FjORD [13]) with Flado. (a) FjORD trains models with fixed channel sparsities for clients with different capabilities. (b) Flado adapts channel sparsities with underlying training trajectories and capabilities for each training round. As an example we use 6 clients with different computational capabilities and a convolutional layer with 8 input and 8 output channels. Each colored bullet "•" denotes a filter computation, lighter-colored ones are more likely to be skipped, and the arrows indicate communications.
To this end, this paper introduces Flado, a method that
optimizes channel activation probabilities to sparsify client
models with trajectory alignment towards the global trajec-
tory. The advantage of Flado is two-fold. First, coarse-
grained channel dropouts can be easily implemented and
leveraged by existing models and hardware devices to ac-
celerate client training. Second, a light-weight trajectory-
alignment algorithm optimizes the sparsity of each channel
in each client with very low overhead for the clients, and it
can reap immense computational benefits. We summarize
the contributions of this paper as follows:
• In contrast to existing fixed sparsity strategies for FL, Flado proposes to further optimize channel activation probabilities to accelerate sparse training for each channel in each client.
• As evinced by our experiments, it can drastically reduce the amount of floating-point operations (FLOPs) required for training by more than 10× under the same communications budget, and enjoys a much better communication/computation Pareto frontier than competing FL approaches when training under data and system heterogeneity.
• Flado widens its lead in convergence rate when high degrees of heterogeneity are present in both data distributions and system capabilities. Furthermore, it scales well to larger models and fractional client participation.
Figure 2. Clients sharing similar data distribution also share similar trained parameters. Simply aggregating the conflicting update trajectories from multiple clients can result in wasteful computations as the trajectories are mostly orthogonal to each other. As an example, we trained the same LeNet-5 model using 10 clients with each pair of clients receiving only same-class images from Fashion-MNIST to simulate concept disparity. Each circle denotes a channel neuron of the 1st layer, and it contains the update magnitudes for all its parameters (arranged radially) after 1 round of training. The 5 rows represent the first 5 channel neurons, and each column is a different client (grouped in pairs).
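To make the channel-activation-probability idea from the contributions above concrete, the toy sketch below samples a Bernoulli keep-mask per output channel of one layer. The probabilities, the inverted-dropout rescaling, and all names are our illustrative assumptions; Flado's actual algorithm adapts these probabilities per client from trajectory alignment, which is not shown here.
import numpy as np

rng = np.random.default_rng(0)

def sparsify_channels(feature_map, keep_prob):
    """Stochastically drop whole output channels of one client's layer.

    feature_map: (N, C, H, W) activations of a convolutional layer
    keep_prob:   (C,) per-channel activation probabilities
    """
    C = feature_map.shape[1]
    mask = rng.binomial(1, keep_prob, size=C).astype(feature_map.dtype)
    # Rescale kept channels so the expected activation is unchanged;
    # dropped channels contribute nothing, so their filter computation
    # could be skipped entirely on a weak client.
    scale = np.where(mask > 0, 1.0 / np.maximum(keep_prob, 1e-8), 0.0)
    return feature_map * (mask * scale)[None, :, None, None]

# Example: a client with limited compute keeps each of 8 channels with prob 0.5
x = rng.standard_normal((2, 8, 4, 4))
y = sparsify_channels(x, np.full(8, 0.5))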
|
Kim_Open-Set_Representation_Learning_Through_Combinatorial_Embedding_CVPR_2023
|
Abstract
Visual recognition tasks are often limited to dealing with
a small subset of classes simply because the labels for the
remaining classes are unavailable. We are interested in
identifying novel concepts in a dataset through represen-
tation learning based on both labeled and unlabeled ex-
amples, and extending the horizon of recognition to both
known and novel classes. To address this challenging task,
we propose a combinatorial learning approach, which natu-
rally clusters the examples in unseen classes using the com-
positional knowledge given by multiple supervised meta-
classifiers on heterogeneous label spaces. The representa-
tions given by the combinatorial embedding are made more
robust by unsupervised pairwise relation learning. The
proposed algorithm discovers novel concepts via a joint
optimization for enhancing the discriminativeness of unseen
classes as well as learning the representations of known
classes generalizable to novel ones. Our extensive exper-
iments demonstrate remarkable performance gains by the
proposed approach on public datasets for image retrieval
and image categorization with novel class discovery.
|
1. Introduction
Despite the remarkable success of machine learning fu-
eled by deep neural networks, existing frameworks still
have critical limitations in an open-world setting, where
some categories are not defined a priori and the labels for
some classes are missing. Although there have been a grow-
ing number of works that identify new classes in unlabeled
data given a set of labeled examples [4, 5, 15–18], they of-
ten assume that all the unlabeled examples belong to unseen
classes and/or the number of novel classes is known in ad-
vance, which makes their problem settings unrealistic.
To address the limitations, this paper introduces an al-
gorithm applicable to a more realistic setting. We aim to
discover and learn the representations of unseen categories
without any prior information or supervision about novel
classes, where unlabeled data may contain examples in both
seen and unseen classes. This task requires the model to be able to effectively identify unseen classes while preserving
the information of previously seen classes. Our problem
setting is more challenging than the case where the unla-
beled data only consist of unseen classes because we have
to solve an additional problem, predicting the membership
of unlabeled examples between seen and unseen classes.
We propose a representation learning approach based on
the concept of combinatorial classification [36], where the
examples in unseen categories are identified by the com-
position of multiple meta-classifiers. Figure 1 illustrates
the main idea of our combinatorial embedding framework,
which forms partitions for novel classes via a combination
of multiple classifiers for the meta-classes involving several
constituent base classes. Images in the same meta-class po-
tentially have common attributes that are helpful for knowl-
edge transfer to novel classes, and we learn the representa-
tions of the images by the proposed combinatorial embed-
ding. The learned representations via the combinatorial em-
bedding become even stronger by unsupervised pairwise re-
lation learning, which is effective in identifying novel classes.
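The partitioning intuition behind combinatorial embedding can be illustrated with a toy sketch: the joint predictions of several coarse meta-classifiers form a code, and codes never produced by labeled data correspond to reserved partitions for novel concepts. All shapes, codes, and names below are our assumptions, not the paper's actual model or training objective.
import numpy as np

def combinatorial_codes(meta_probs):
    """Turn the outputs of several coarse meta-classifiers into a code.

    meta_probs: list of (n, k_i) arrays, softmax outputs of each
                meta-classifier over its own meta-classes.
    Returns an (n, num_classifiers) integer code per example.
    """
    return np.stack([p.argmax(axis=1) for p in meta_probs], axis=1)

# Toy example with two meta-classifiers over 4 unlabeled examples
rng = np.random.default_rng(0)
m1 = rng.dirichlet(np.ones(2), size=4)    # binary meta-classifier
m2 = rng.dirichlet(np.ones(3), size=4)    # three-way meta-classifier
codes = combinatorial_codes([m1, m2])

# Codes observed on labeled data define the "seen" partitions; any other
# code falls into a reserved partition and flags a novel-class candidate.
seen_codes = {(0, 1), (1, 2)}
is_novel_candidate = [tuple(c) not in seen_codes for c in codes]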
Our main contributions are summarized as follows.
• We propose a novel combinatorial learning framework, which embeds the examples in both seen and novel classes effectively by the composition of the knowledge learned from multiple heterogeneous meta-class classifiers.
• We introduce an unsupervised learning approach to define pairwise relations, especially the semantic structure between labeled and unlabeled examples, which further improves the quality of the representations given by combinatorial embedding.
• We demonstrate the outstanding performance of our model in the presence of novel classes through extensive evaluations on image retrieval and image categorization with novel class discovery benchmarks.
In the rest of this paper, we first review related works in
Section 2 and discuss our main algorithm in Section 3.
Section 4 presents our experimental results and Section 5
concludes this paper.
Figure 1. Conceptual illustration of decision boundaries (black solid lines) given by combinatorial classification with three seen classes, where three binary meta-classifiers are added one-by-one from (a) one meta-class set to (c) three meta-class sets. Unlike the standard classifier that creates decision boundaries for seen classes only, the combinatorial classification based on multiple coarse-grained classifiers creates and reserves partitions, which are distinct from those of seen classes, potentially corresponding to novel concepts.
|
Liu_Robust_Dynamic_Radiance_Fields_CVPR_2023
|
Abstract
Dynamic radiance field reconstruction methods aim to
model the time-varying structure and appearance of a dy-
namic scene. Existing methods, however, assume that ac-
curate camera poses can be reliably estimated by Structure
from Motion (SfM) algorithms. These methods, thus, are un-
reliable as SfM algorithms often fail or produce erroneous
poses on challenging videos with highly dynamic objects,
poorly textured surfaces, and rotating camera motion. We
address this robustness issue by jointly estimating the static
and dynamic radiance fields along with the camera param-
eters (poses and focal length). We demonstrate the robust-
ness of our approach via extensive quantitative and qualita-
tive experiments. Our results show favorable performance
over the state-of-the-art dynamic view synthesis methods.
|
1. Introduction
Videos capture and preserve memorable moments of our
lives. However, when watching regular videos, viewers observe the scene from fixed viewpoints and cannot interactively navigate the scene afterward.
*This work was done while Yu-Lun and Andreas were interns at Meta.
Dynamic view synthe-
sis techniques aim to create photorealistic novel views of
dynamic scenes from arbitrary camera angles and points
of view. These systems are essential for innovative ap-
plications such as video stabilization [33, 42], virtual real-
ity [7, 15], and view interpolation [13, 85], which enable
free-viewpoint videos and let users interact with the video
sequence. It facilitates downstream applications like virtual
reality, virtual 3D teleportation, and 3D replays of live pro-
fessional sports events.
Dynamic view synthesis systems typically rely on ex-
pensive and laborious setups, such as fixed multi-camera
capture rigs [7, 10, 15, 50, 85], which require simultaneous
capture from multiple cameras. However, recent advance-
ments have enabled the generation of dynamic novel views
from a single stereo or RGB camera, previously limited to
human performance capture [16, 28] or small animals [65].
While some methods can handle unstructured video in-
put [1, 3], they typically require precise camera poses es-
timated via SfM systems. Nonetheless, there have been
many recent dynamic view synthesis methods for unstructured videos [24, 25, 39, 52, 53, 56, 71, 76] and new methods based on deformable fields [20].
Table 1. Categorization of view synthesis methods.
Static scene, known camera poses: NeRF [44], SVS [59], NeRF++ [82], Mip-NeRF [4], Mip-NeRF 360 [5], DirectVoxGO [68], Plenoxels [23], Instant-ngp [45], TensoRF [12]
Static scene, unknown camera poses: NeRF-- [73], BARF [40], SC-NeRF [31], NeRF-SLAM [60]
Dynamic scene, known camera poses: NV [43], D-NeRF [56], NR-NeRF [71], NSFF [39], DynamicNeRF [24], Nerfies [52], HyperNeRF [53], TiNeuVox [20], T-NeRF [25]
Dynamic scene, unknown camera poses: Ours
However, these techniques
require precise camera poses typically estimated via SfM
systems such as COLMAP [62] (bottom left of Table 1).
Unfortunately, SfM systems are not robust to many issues,
such as noisy images from low-light conditions, motion blur
caused by users, or dynamic objects in the scene, such as
people, cars, and animals. The robustness problem of the
SfM systems causes the existing dynamic view synthesis
methods to be fragile and impractical for many challenging
videos. Recently, several NeRF-based methods [31, 40, 60,
73] have proposed jointly optimizing the camera poses with
the scene geometry. Nevertheless, these methods can only
handle strictly static scenes (top right of Table 1).
We introduce RoDynRF, an algorithm for reconstructing
dynamic radiance fields from casual videos. Unlike exist-
ing approaches, we do not require accurate camera poses
as input. Our method optimizes camera poses and two ra-
diance fields, modeling static and dynamic elements. Our
approach includes a coarse-to-fine strategy and epipolar ge-
ometry to exclude moving pixels, deformation fields, time-
dependent appearance models, and regularization losses for
improved consistency. We evaluate the algorithm on multi-
ple datasets, including Sintel [9], Dynamic View Synthe-
sis [79], iPhone [25], and DAVIS [55], and show visual
comparisons with existing methods.
We summarize our core contributions as follows:
• We present a space-time synthesis algorithm from a
dynamic monocular video that does not require known
camera poses and camera intrinsics as input.
• Our proposed careful architecture designs and auxil-
iary losses improve the robustness of camera pose es-
timation and dynamic radiance field reconstruction.
• Quantitative and qualitative evaluations demonstrate
the robustness of our method over other state-of-the-
art methods on several challenging datasets where typical SfM systems fail to estimate camera poses.
|
Liu_Semantic_Ray_Learning_a_Generalizable_Semantic_Field_With_Cross-Reprojection_Attention_CVPR_2023
|
Abstract
In this paper, we aim to learn a semantic radiance field
from multiple scenes that is accurate, efficient and gener-
alizable. While most existing NeRFs target at the tasks of
neural scene rendering, image synthesis and multi-view re-
construction, there are a few attempts such as Semantic-
NeRF that explore to learn high-level semantic understanding with the NeRF structure.
†Corresponding author.
However, Semantic-NeRF si-
multaneously learns color and semantic label from a single
ray with multiple heads, where the single ray fails to pro-
vide rich semantic information. As a result, Semantic NeRF
relies on positional encoding and needs to train one specific
model for each scene. To address this, we propose Semantic
Ray (S-Ray) to fully exploit semantic information along the
ray direction from its multi-view reprojections. As directly
performing dense attention over multi-view reprojected rays
would suffer from heavy computational cost, we design
a Cross-Reprojection Attention module with consecutive
intra-view radial and cross-view sparse attentions, which
decomposes contextual information along reprojected rays
and cross multiple views and then collects dense connec-
tions by stacking the modules. Experiments show that our
S-Ray is able to learn from multiple scenes, and it presents
strong generalization ability to adapt to unseen scenes.
Project page: https://liuff19.github.io/S-Ray/.
|
1. Introduction
Recently, Neural Radiance Field (NeRF) [32], a new
novel view synthesis method with implicit representation,
has taken the field of computer vision by storm [11]. NeRF
and its variants [2, 32, 57, 59] adopt multi-layer perceptrons
(MLPs) to learn continuous 3D representations and utilize
multi-view images to render unseen views with fine-grained
details. NeRF has shown state-of-the-art visual quality, pro-
duced impressive demonstrations, and inspired many subse-
quent works [4, 19, 20, 53, 57].
While the conventional NeRFs have achieved great suc-
cess in low- and middle-level vision tasks such as neural
scene rendering, image synthesis, and multi-view recon-
struction [4,12,27,35,36,42,50], it is interesting to explore
their more possibilities in high-level vision tasks and ap-
plications. Learning high-level semantic information from
3D scenes is a fundamental task of computer vision with
a wide range of applications [10, 13, 15, 33]. For example,
a comprehensive semantic understanding of scenes enables
intelligent agents to plan context-sensitive actions in their
environments. One notable attempt to learn interpretable se-
mantic understanding with the NeRF structure is Semantic-
NeRF [61], which regresses a 3D-point semantic class to-
gether with radiance and density. Semantic-NeRF shows
the potential of NeRF in various high-level tasks, such as
scene-labeling and novel semantic view synthesis.
However, Semantic-NeRF follows the vanilla NeRF by
estimating the semantic label from a single ray with a new
semantic head. While this operation is reasonable to learn
low-level information including color and density, a single
ray fails to provide rich semantic patterns – we can tell the
color from observing a single pixel, but not its semantic la-
bel. To deal with this, Semantic-NeRF heavily relies on po-
sitional encoding to learn semantic features, which is prone
to overfit the current scene and only applicable to novel
views within the same scene [51]. As a result, Semantic-
NeRF has to train one model from scratch for every scene
independently or provides very limited novel scene general-
ization by utilizing other pretrained models to infer 2D seg-
mentation maps as training signals for unseen scenes. This
significantly limits the range of applications in real-world scenarios.
In this paper, we propose a neural semantic represen-
tation called Semantic Ray (S-Ray) to build a generaliz-
able semantic field, which is able to learn from multiple
scenes and directly infer semantics on novel viewpoints
across novel scenes as shown in Figure 1. As each view
provides specific high-level information for each ray regarding viewpoints, occlusions, etc., we design a Cross-
Reprojection Attention module in S-Ray to fully exploit
semantic information from the reprojections on multiple
views, so that the learned semantic features have stronger
discriminative power and generalization ability. While di-
rectly performing dense attention over the sampled points
on each reprojected ray of multiple views would suffer from
heavy computational costs, we decompose the dense atten-
tion into intra-view radial and cross-view sparse attentions
to learn comprehensive relations in an efficient manner.
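The decomposition can be pictured with a small sketch: instead of dense attention over all V x P tokens from the reprojected rays, attend first along each ray within a view and then across views for each sampled point. This is our simplified stand-in with plain dot-product attention and no learned projections, not the actual radial and sparse attention modules.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attend(q, k, v):
    """Scaled dot-product attention; q: (..., Lq, d), k, v: (..., Lk, d)."""
    scores = q @ np.swapaxes(k, -1, -2) / np.sqrt(q.shape[-1])
    return softmax(scores, axis=-1) @ v

def decomposed_cross_reprojection(feats):
    """feats: (V, P, d) features of P sampled points on the reprojected
    ray in each of V source views. Dense attention over all V*P tokens
    costs O((V*P)^2); attending within each ray and then across views
    per point keeps the cost much lower."""
    intra = attend(feats, feats, feats)            # within each view, along the ray
    by_point = np.swapaxes(intra, 0, 1)            # (P, V, d), group tokens by point
    cross = attend(by_point, by_point, by_point)   # across views for each point
    return np.swapaxes(cross, 0, 1)                # back to (V, P, d)

out = decomposed_cross_reprojection(np.random.randn(8, 64, 32))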
More specifically, for each query point in a novel view,
different from Semantic-NeRF that directly estimates its se-
mantic label with MLP, we reproject it to multiple known
views. It is worth noting that since the emitted ray from
the query point is virtual, we cannot obtain the exact repro-
jected point on each view, but a reprojected ray that shows
possible positions. Therefore, our network is required to si-
multaneously model the uncertainty of reprojection within
each view, and comprehensively exploit semantic context
from multiple views with their respective significance. To
this end, our Cross-Reprojection Attention consists of an
intra-view radial attention module that learns the relations
among sampled points from the query ray, and a cross-view
sparse attention module that distinguishes the same point in
different viewpoints and scores the semantic contribution of
each view. As a result, our S-Ray is aware of the scene prior
with rich patterns and generalizes well to novel scenes. We
evaluate our method quantitatively and qualitatively on syn-
thetic scenes from the Replica dataset [46] and real-world
scenes from the ScanNet dataset [7]. Experiments show
that our S-Ray successfully learns from multiple scenes
and generalizes to unseen scenes. By following Semantic-
NeRF [61], we design competitive baselines based on the
recent MVSNeRF [4] and NeuRay [27] architectures for
generalizable semantic field learning. Our S-Ray signifi-
cantly outperforms these baselines which demonstrates the
effectiveness of our cross-reprojection attention module.
|
Li_LAVENDER_Unifying_Video-Language_Understanding_As_Masked_Language_Modeling_CVPR_2023
|
Abstract
Unified vision-language frameworks have greatly ad-
vanced in recent years, most of which adopt an encoder-
decoder architecture to unify image-text tasks as sequence-
to-sequence generation. However, existing video-language
(VidL) models still require task-specific designs in model
architecture and training objectives for each task. In this
work, we explore a unified VidL framework LAVENDER ,
where Masked Language Modeling [13] (MLM) is used as
the common interface for all pre-training and downstream
tasks. Such unification leads to a simplified model archi-
tecture, where only a lightweight MLM head, instead of
a decoder with much more parameters, is needed on top
of the multimodal encoder. Surprisingly, experimental re-
sults show that this unified framework achieves competi-
tive performance on 14 VidL benchmarks, covering video
question answering, text-to-video retrieval and video cap-
tioning. Extensive analyses further demonstrate LAVEN -
DER can (i) seamlessly support all downstream tasks with
just a single set of parameter values when multi-task fine-
tuned; ( ii) generalize to various downstream tasks with lim-
ited training samples; and ( iii) enable zero-shot evaluation
on video question answering tasks. Code is available at
https://github.com/microsoft/LAVENDER .
|
1. Introduction
Large-scale transformer-based pre-training is now the de
facto practice for NLP and vision-language research [13,
25, 37, 49, 50]. Together with the great success of image-
text pre-training [10, 35, 40, 59], video-language (VidL) pre-
training [29, 33, 58, 80, 83] has also received an increasing
amount of attention. By pre-training an end-to-end multi-
modal transformer on a large number of video-text pairs,
state-of-the-art performance has been achieved across a wide
range of VidL tasks, including video question answering
(QA) [24, 68], text-to-video retrieval [22, 51], and video
captioning [64, 71]. These advances are encouraging; how-
ever, on the other hand, all existing VidL works require
designing task-specific heads on top of the transformer en-
coder for each pre-training or downstream task. For example,
during pre-training, separate Masked Language Modeling
[13] (MLM) and Video Text Matching (VTM) heads are
used, while a new, separately parameterized head needs to
be added for each downstream adaptation. Furthermore, due
to the particular nature of different tasks, they are typically
modeled using different training objectives. For example,
multiple-choice video QA is formulated as a classification
problem, while video captioning is inherently a generation
task. A natural but challenging question arises: can we have
a unified architecture that supports all the popular VidL tasks
simultaneously without introducing task-specific heads?
To answer this, we present LAVENDER, a unified VidL framework where all pre-training and downstream tasks are formulated as simple MLM.
Table 1. New state-of-the-art performance with LAVENDER (14M+16M pre-train, single-task finetune) across 10 VidL tasks. Accuracy, average(R1, R5, R10) and CIDEr scores are reported for video QA, retrieval and captioning.
Columns: TGIF (Action, Transition, Frame), MSRVTT (MC, QA), LSMDC (MC, FiB), MSVD (QA), Captioning (MSVD), Retrieval (DiDeMo)
Published SOTA: 94.0 [80], 96.2 [80], 69.5 [80], 90.9 [80], 43.1 [80], 81.7 [80], 52.9 [80], 46.3 [73], 120.6 [36], 65.1 [5]
LAVENDER: 94.8, 98.7, 73.5, 97.2, 45.0, 85.9, 57.1, 55.6, 150.3, 72.4
∆: 0.8↑, 2.5↑, 4.0↑, 6.3↑, 1.9↑, 4.2↑, 4.2↑, 9.3↑, 29.7↑, 7.3↑
As shown in Figure 1, we use
two pre-training tasks: MLM and VTM. However, for VTM,
instead of adding a head on top of the output of the com-
monly used [CLS] token, as used in all existing works, we
propose to append the same [MASK] token that is used for
MLM at the end of the video-text input, and use the same
MLM head to predict whether the video-text pair matches
or not. Note that VTM is typically formulated as a binary
classification problem; here, we simply treat the output of
true orfalse from VTM as natural language tokens di-
rectly predicted from the whole vocabulary, so that the same
set of parameters can be used for both MLM and VTM.
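As a minimal illustration of scoring VTM with the MLM head alone, the sketch below takes the vocabulary logits that the head produces at the appended [MASK] position and reads off the probability of the word "true". The stand-in logits, vocabulary size, and token id are hypothetical; in LAVENDER the logits would come from the multimodal encoder and its MLM head.
import numpy as np

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def vtm_match_score(mlm_logits_at_mask, true_id):
    """mlm_logits_at_mask: (vocab_size,) logits at the appended [MASK]
    position for one video-text pair; true_id: vocabulary id of "true".
    The matching score is the probability mass placed on "true"."""
    return softmax(mlm_logits_at_mask)[true_id]

# Toy usage: rank 3 candidate videos for one text query
rng = np.random.default_rng(0)
logits = rng.standard_normal((3, 100))            # stand-in model outputs
scores = [vtm_match_score(l, true_id=7) for l in logits]
ranking = np.argsort(scores)[::-1]                # best-matching video first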
During downstream adaptation, instead of following stan-
dard practice in VidL literature to replace the MLM head in
pre-training with new heads, we use the same MLM head
used in pre-training for all downstream tasks. Specifically,
•For text-to-video retrieval, we train the model in the same
way as in the VTM pre-training task. During inference,
for each text query, we concatenate it with each candidate
video, and calculate the corresponding probability of the
[MASK] token being predicted as true , and then rank
all candidate videos based on that score.
•For multiple-choice video QA, we concatenate the ques-
tion and each answer candidate sequentially, and add a
[MASK] token at the end of the sequence, and use the
same MLM head to predict the answer as “ n” (assuming
the ground-truth choice is the n-th answer).
•For open-ended video QA, since most of the ground-truth
answers in our tested datasets only contain one word, we
simply append a [MASK] token to the end of the video-
question input, and let the model predict the answer from
the whole vocabulary.
•For video captioning, during training, we mask a certain
percentage of the tokens, and then predict the masked
tokens using a seq2seq attention mask [35, 81]. During
inference, the full caption is auto-regressively predicted,
by inserting [MASK] tokens one at a time.
LAVENDER is inspired by VL-T5 [12], UniTAB [76] and
OFA [63] that aim to provide a unified pre-training frame-
work for image-text tasks. However, LAVENDER adopts an encoder-only model and an additional lightweight MLM head on top of it, while a heavy transformer decoder is needed in [12, 63, 76]. By unifying all VidL tasks as MLM,
LAVENDER can seamlessly adapt to different VidL tasks,
meanwhile ( i) support different VidL tasks with a single set
of parameter values when multi-task finetuned; ( ii) general-
ize to test data under few-shot finetuning; and ( iii) enable
zero-shot inference on video question answering. Surpris-
ingly, by using this simple generative approach, we outper-
form previously published state-of-the-arts on 10 out of 14
downstream tasks (Table 1), even when pre-trained with
much fewer data (Section 4.5).
|
Lin_CLIP_Is_Also_an_Efficient_Segmenter_A_Text-Driven_Approach_for_CVPR_2023
|
Abstract
Weakly supervised semantic segmentation (WSSS) with
image-level labels is a challenging task. Mainstream ap-
proaches follow a multi-stage framework and suffer from
high training costs. In this paper, we explore the potential of
Contrastive Language-Image Pre-training models (CLIP)
to localize different categories with only image-level labels
and without further training. To efficiently generate high-
quality segmentation masks from CLIP , we propose a novel
WSSS framework called CLIP-ES. Our framework improves
all three stages of WSSS with special designs for CLIP: 1)
We introduce the softmax function into GradCAM and ex-
ploit the zero-shot ability of CLIP to suppress the confu-
sion caused by non-target classes and backgrounds. Mean-
while, to take full advantage of CLIP , we re-explore text in-
puts under the WSSS setting and customize two text-driven
strategies: sharpness-based prompt selection and synonym
fusion. 2) To simplify the stage of CAM refinement, we pro-
pose a real-time class-aware attention-based affinity (CAA)
module based on the inherent multi-head self-attention
(MHSA) in CLIP-ViTs. 3) When training the final segmenta-
tion model with the masks generated by CLIP, we introduce a confidence-guided loss (CGL) that focuses on confident regions.
Our CLIP-ES achieves SOTA performance on Pascal VOC
2012 and MS COCO 2014 while only taking 10% time of
previous methods for the pseudo mask generation. Code is
available at https://github.com/linyq2117/CLIP-ES.
|
1. Introduction
Semantic segmentation [7,40] aims to predict pixel-level
labels but requires labor-intensive pixel-level annotations.
*Equal contribution.
†Corresponding author.
Figure 1. Effect of the softmax function on GradCAM of CLIP. The original GradCAM uses the logit (before the softmax) of the target class to compute gradient. We propose to compute gradient based on the probability (after the softmax). It can avoid confusion between the target class and background (the first two columns) and other object classes in the dataset (the last two columns).
Weakly supervised semantic segmentation (WSSS) is pro-
posed to reduce the annotation cost. WSSS only requires
weak supervision, e.g., image-level labels [2], bounding
boxes [10, 33], points [4] or scribbles [31, 42]. The most
commonly used one is WSSS with image-level annotations,
which is the focus of our paper.
Previous WSSS approaches [24, 43, 46, 48] with image-
level labels typically follow a three-stage framework. First,
a classification model is trained on the specific dataset to
generate initial CAMs (Class Activation Maps). Then, the
initial CAMs are refined by the pixel affinity network [1, 2]
or extra saliency maps [18, 41]. At last, the refined CAMs
serve as the pseudo masks to train a semantic segmentation
model. Obviously, this multi-stage framework is compli-
cated as it needs to train multiple models at different stages,
especially the separate classification model and affinity net-
work in the first two stages. Although some end-to-end
methods [3, 50] are proposed to improve efficiency, they
tend to achieve poor performance compared to multi-stage
methods. Therefore, it is a challenge to simplify the proce-
dure of WSSS while maintaining its high performance.
Recently, the Contrastive Language-Image Pre-training
(CLIP) [34], a model pre-trained on 400 million image-text
pairs from the Internet to predict if an image and a text snip-
pet are matched, has shown great success in zero-shot classification. This dataset-agnostic model can transfer to unseen datasets directly. Besides, the powerful text-to-image generation ability built on CLIP, e.g., DALL-E2 [35], indicates the strong relation between texts and corresponding
components in the image. On the other hand, multi-head
self-attention (MHSA) in ViT [12] reflects semantic affinity
among patches and has the potential to substitute for affin-
ity network. Motivated by these, we believe CLIP with ViT
architecture could simplify the procedure of WSSS and lo-
calize categories in the image through well-designed texts.
This paper proposes a new framework, CLIP-ES, to im-
prove each stage in terms of efficiency and accuracy for
WSSS. In the first stage, the generated CAMs are usually
redundant and incomplete. Most methods [43,45] are based
on binary cross-entropy for multi-label classification. The
loss is not mutually exclusive, so the generated CAMs suf-
fer from confusion between foreground and non-target fore-
ground categories, e.g., person and cow, or foreground and
background categories, e.g., boat and water, as shown in
Fig. 1. The incompleteness stems from the gap between
the classification and localization tasks, causing CAMs to focus only on discriminative regions. To solve the confusion
problems above, we introduce the softmax function into
GradCAM to make categories mutually exclusive and de-
fine a background set to realize class-related background
suppression. To get more complete CAMs and fully en-
joy the merits inherited from CLIP, we investigate the ef-
fect of text inputs in the setting of WSSS and design two
task-specific text-driven strategies: sharpness-based prompt
selection and synonym fusion.
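A minimal sketch of the after-softmax GradCAM idea is given below. It assumes a single image (no batch dimension), that the class logits were computed from the feature maps so gradients can flow, and it omits CLIP's text-driven background set and the later CAA refinement; the function and variable names are ours.
import torch
import torch.nn.functional as F

def softmax_gradcam(feat, logits, target_idx):
    """feat:   (C, H, W) feature maps with requires_grad=True
    logits: (K,) class logits produced from feat
    Returns a (H, W) activation map computed from the softmax probability
    of the target class instead of its raw logit."""
    prob = torch.softmax(logits, dim=0)[target_idx]        # after-softmax score
    grads, = torch.autograd.grad(prob, feat, retain_graph=True)
    weights = grads.mean(dim=(1, 2))                       # channel-wise weights
    cam = F.relu((weights[:, None, None] * feat).sum(dim=0))
    return cam / (cam.max() + 1e-8)

# Toy usage with random features and a random linear classifier head
feat = torch.randn(512, 7, 7, requires_grad=True)
logits = feat.mean(dim=(1, 2)) @ torch.randn(512, 20)
cam = softmax_gradcam(feat, logits, target_idx=3)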
In the second stage, instead of training an affinity net-
work as in previous works, we leverage the attention ob-
tained from the vision transformer. However, the attention
map is class-agnostic, while the CAM is class-wise. To
bridge this gap, we propose a class-aware attention-based
affinity (CAA) module to refine the initial CAMs in real-
time, which can be integrated into the first stage. Without
fine-tuning CLIP on downstream datasets, our method re-
tains CLIP’s generalization ability and is flexible to gener-
ate pseudo labels for new classes and new datasets.
In the last stage, the pseudo masks from the refined
CAMs are viewed as ground truth to train a segmentation model in a fully supervised manner. However, the pseudo mask may be noisy, and directly applying it to training may mislead the optimization process. We propose a confidence-
guided loss (CGL) for training the final segmentation model
by ignoring the noise in pseudo masks.
Our contributions are summarized as follows:
• We propose a simple yet effective framework for
WSSS based on frozen CLIP. We reveal that given
only image-level labels, CLIP can perform remarkable
semantic segmentation without further training. Our
method can induce this potential of localizing objects
that exists in CLIP.
• We introduce the softmax function into GradCAM and
design a class-related background set to overcome cat-
egory confusion problems. To get better CAMs, some
text-driven strategies inherited from CLIP are explored
and specially redesigned for WSSS.
• We present a class-aware attention-based affinity mod-
ule (CAA) to refine the initial CAMs in real time,
and introduce confidence-guided loss (CGL) to miti-
gate the noise in pseudo masks when training the final
segmentation model.
• Experiment results demonstrate that our framework
can achieve SOTA performance and is 10× more efficient
than other methods when generating pseudo masks.
|
Li_Learning_To_Fuse_Monocular_and_Multi-View_Cues_for_Multi-Frame_Depth_CVPR_2023
|
Abstract
Multi-frame depth estimation generally achieves high
accuracy relying on the multi-view geometric consistency.
When applied in dynamic scenes, e.g., autonomous driving,
this consistency is usually violated in the dynamic areas,
leading to corrupted estimations. Many multi-frame meth-
ods handle dynamic areas by identifying them with explicit
masks and compensating the multi-view cues with monocu-
lar cues represented as local monocular depth or features.
The improvements are limited due to the uncontrolled qual-
ity of the masks and the underutilized benefits of the fu-
sion of the two types of cues. In this paper, we propose a
novel method to learn to fuse the multi-view and monocu-
lar cues encoded as volumes without needing the heuristi-
cally crafted masks. As unveiled in our analyses, the multi-
view cues capture more accurate geometric information in
static areas, and the monocular cues capture more useful
contexts in dynamic areas. To let the geometric percep-
tion learned from multi-view cues in static areas propagate
to the monocular representation in dynamic areas and let
monocular cues enhance the representation of multi-view
cost volume, we propose a cross-cue fusion (CCF) module,
which includes the cross-cue attention (CCA) to encode the
spatially non-local relative intra-relations from each source
to enhance the representation of the other. Experiments on
real-world datasets prove the significant effectiveness and
generalization ability of the proposed method.
|
1. Introduction
Depth estimation is a fundamental and challenging task
for 3D scene understanding in various application scenar-
ios, such as autonomous driving [10, 16, 27]. With the
advent of convolutional neural networks (CNNs) [12, 18],
depth estimation methods [2, 24–26, 40, 45, 46] are capable
of predicting promising results given either single or multiple images.
Corresponding author: Y. Zhang, D. Gong, W. Yin.
Figure 1. Depth estimation in dynamic scenes. Multi-frame predictions preserve high overall accuracy while degrading in dynamic areas. The monocular method better handles moving areas while suffering in static areas. Our method fuses both multi-frame and monocular depth cues for final prediction, yielding superior performance of the whole scene.
The single image-based methods learn the
monocular cues, e.g., the texture or object-level features,
to predict the depth [2, 42, 44] , while multi-frame meth-
ods [36,37,40] can generally obtain higher overall accuracy
relying on the multi-view geometric cues. Specifically, the
3D cost volume has been proved simple and effective for
depth estimation, which encodes the multi-frame matching
probabilities with a set of depth hypotheses [15, 37, 40].
Although multi-frame methods are widely used in scene
reconstruction [15, 34, 40], they encounter non-negligible
challenges in the dynamic scenes with dynamic areas ( e.g.,
moving cars and pedestrians). The dynamic areas cause
corrupted values in the cost volume due to the violation of
multi-view consistency [9,33] and mislead the network pre-
dictions. However, depth estimation for the dynamic areas
is usually crucial in most applications [10,16,19]. As shown
in Fig. 1, multi-frame depth estimation for the dynamic cars
is more challenging than the static backgrounds.
To handle the dynamic areas violating the multi-view
consistency, a few multi-frame depth estimation methods
[9, 36, 37] try to identify and exclude the dynamic areas
through an explicit mask obtained relying on some assump-
tions or heuristics. Specifically, some method [37] excludes
the multi-frame cost volume relying on a learned dynamic
mask and compensates the excluded areas with monocular
features; some methods directly adjust the dynamic object
locations in input images [9] or supervise multi-frame depth
[36] with predicted monocular depth. However, these meth-
ods are usually sensitive to the explicit mask’s quality, and
the masks are obtained from additional networks or manu-
ally crafted criteria [9, 36, 37]. Despite better performances
than pure multi-frame methods, these methods exhibit lim-
ited performance improvement compared with the addition-
ally introduced monocular cues (as shown in Tab. 4), im-
plying underutilized benefits from the fusion of the multi-
view and monocular cues. Although some self-supervised
monocular depth estimation methods [4,5,11,14,22,23] also
address the multi-view inconsistency issues, they mainly fo-
cus on handling the unfaithful self-supervision signals.
To tackle the above issues, we propose a novel method
that fuses the respective benefits from the monocular and
multi-view cues, leading to significant improvement upon
each individual source in dynamic scenes. We first ana-
lyze the behaviors of monocular and multi-frame cues in
dynamic scenes, that the pure monocular method can gen-
erally learn good structural relations around the dynamic
areas, and the pure multi-view cue preserves more accurate
geometric properties in the static areas. We then unveil the
effectiveness of leveraging the benefits of both depth cues
by directly fusing depth volumes (Sec. 3). Inspired by the
above observations, beyond treating monocular cues as a lo-
cal supplement of multi-frame methods [9, 36, 37], we pro-
pose a cross-cue fusion (CCF) module to enhance the repre-
sentations of multi-view and monocular cues with the other,
and fuse them together for dynamic depth estimation. We
use the spatially non-local relative intra-relations encoded
in cross-cue attention (CCA) weights from each
source to guide the representation of the other, as shown
in Fig. 4. Specifically, the intra-relations of monocular cues
can help to address multi-view inconsistency in dynamic ar-
eas, while the intra-relations from multi-view cues help to
enhance the geometric property of the monocular represen-
tation, as visualized in Fig. 5. Unlike [1, 9, 36, 37], the
proposed method unifies the input format of both cues as
volumes and conducts fusion on them, which achieves bet-
ter performances (as shown in Fig. 6). The proposed fu-
sion module is learnable and does not require any heuristic
masks, leading to better generalization and flexibility.
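The guidance idea can be pictured with a toy sketch: attention weights computed within one cue (its intra-relations) re-weight the other cue's features, and vice versa. This uses flattened features and plain dot-product attention with no learned projections, and is only our illustration rather than the actual CCF module operating on depth volumes.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_cue_guidance(mono, multi):
    """mono, multi: (N, d) flattened spatial features from the monocular
    and multi-view cues (N spatial locations, d channels). Attention is
    computed within one cue and applied to the other cue's features."""
    d = mono.shape[1]
    attn_mono = softmax(mono @ mono.T / np.sqrt(d))     # intra-relations of monocular cue
    attn_multi = softmax(multi @ multi.T / np.sqrt(d))  # intra-relations of multi-view cue
    multi_enhanced = attn_mono @ multi                  # monocular relations guide multi-view cue
    mono_enhanced = attn_multi @ mono                   # multi-view relations guide monocular cue
    return np.concatenate([mono_enhanced, multi_enhanced], axis=1)

fused = cross_cue_guidance(np.random.randn(256, 32), np.random.randn(256, 32))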
Our main contributions are summarized as follows:
• We analyze multi-frame and monocular depth estima-
tions in dynamic scenes and unveil their respective
advantages in static and dynamic areas. Inspired by
this, we propose a novel method that fuses depth vol-
umes from each cue to achieve significant improvement upon individual estimations in dynamic scenes.
• We propose a cross-cue fusion (CCF) module that uti-
lizes the cross-cue attention to encode non-local intra-
relations from one depth cue to guide the representa-
tion of the other. Different from methods using local
masks, the attention weights learn mask-free global ge-
ometric information according to the geometric prop-
erties of each depth cue (as shown in Fig. 5).
• The proposed method outperforms the state-of-the-art
method in dynamic areas with a significant error reduction of 21.3% while retaining its superiority in overall performance on KITTI. It also achieves the best generalization ability in dynamic areas on the DDAD dataset among the competing methods.
|
Liu_SimpleNet_A_Simple_Network_for_Image_Anomaly_Detection_and_Localization_CVPR_2023
|
Abstract
We propose a simple and application-friendly network
(called SimpleNet) for detecting and localizing anoma-
lies. SimpleNet consists of four components: (1) a pre-
trained Feature Extractor that generates local features, (2)
a shallow Feature Adapter that transfers local features to-
wards target domain, (3) a simple Anomaly Feature Gener-
ator that counterfeits anomaly features by adding Gaussian
noise to normal features, and (4) a binary Anomaly Dis-
criminator that distinguishes anomaly features from normal
features. During inference, the Anomaly Feature Generator
would be discarded. Our approach is based on three in-
tuitions. First, transforming pre-trained features to target-
oriented features helps avoid domain bias. Second, gen-
erating synthetic anomalies in feature space is more ef-
fective, as defects may not have much commonality in the
image space. Third, a simple discriminator is much effi-
cient and practical. In spite of simplicity, SimpleNet outper-
forms previous methods quantitatively and qualitatively.
*Corresponding author
Figure 2. Inference speed (FPS) versus I-AUROC on the MVTec AD benchmark. SimpleNet outperforms all previous methods on both accuracy and efficiency by a large margin.
On
the MVTec AD benchmark, SimpleNet achieves an anomaly
detection AUROC of 99.6%, reducing the error by 55.5%
compared to the next best performing model. Further-
more, SimpleNet is faster than existing methods, with a
high frame rate of 77 FPS on a 3080ti GPU. Additionally,
SimpleNet demonstrates significant improvements in per-
formance on the One-Class Novelty Detection task. Code:
https://github.com/DonaldRR/SimpleNet .
|
1. Introduction
Image anomaly detection and localization task aims to
identify abnormal images and locate abnormal subregions.
The technique to detect the various anomalies of interest has
a broad set of applications in industrial inspection [3, 6]. In
industrial scenarios, anomaly detection and localization is
especially hard, as abnormal samples are scarce and anoma-
lies can vary from subtle changes such as thin scratches to
large structural defects, e.g. missing parts. Some examples
from the MVTec AD benchmark [3] along with results from
our proposed method are shown in Figure 1. This situation
prevents supervised methods from being applied.
Current approaches address this problem in an unsuper-
vised manner, where only normal samples are used dur-
ing the training process. The reconstruction-based meth-
ods [10, 21, 31], synthesizing-based methods [17, 30], and
embedding-based methods [6, 22, 24] are three main trends
for tackling this problem. The reconstruction-based meth-
ods such as [21,31] assume that a deep network trained with
only normal data cannot accurately reconstruct anomalous
regions. The pixel-wise reconstruction errors are taken as
anomaly scores for anomaly localization. However, this as-
sumption may not always hold, and sometimes a network
can ”generalize” so well that it can also reconstruct the ab-
normal inputs well, leading to misdetection [10, 19]. The
synthesizing-based methods [17, 30] estimate the decision
boundary between the normal and anomalous by training
on synthetic anomalies generated on anomaly-free images.
However, the synthesized images are not realistic enough.
Features from synthetic data might stray far from the normal features; training with such negative samples could result in
a loosely bounded normal feature space, meaning indistinct
defects could be included in in-distribution feature space.
Recently, the embedding-based methods [6, 7, 22, 24]
achieve state-of-the-art performance. These methods use
ImageNet pre-trained convolutional neural networks (CNN)
to extract generalized normal features. Then a statistical al-
gorithm such as multivariate Gaussian distribution [6], nor-
malizing flow [24], and memory bank [22] is adopted to em-
bed normal feature distribution. Anomalies are detected by
comparing the input features with the learned distribution or
the memorized features. However, industrial images gener-
ally have a different distribution from ImageNet, so directly
using these biased features may cause mismatch problems.
Moreover, the statistical algorithms often suffer from high
computational complexity or high memory consumption.
To mitigate the aforementioned issues, we propose a
novel anomaly detection and localization network, called
SimpleNet. SimpleNet takes advantage of the synthesizing-
based and the embedding-based manners, and makes sev-
eral improvements. First, instead of directly using pre-
trained features, we propose to use a feature adaptor to
produce target-oriented features which reduce domain bias.
Second, instead of directly synthesizing anomalies on the
images, we propose to generate anomalous features by adding
noise to normal features in feature space. We argue
that with a properly calibrated scale of the noise, a closely
bounded normal feature space can be obtained. Third, we
simplify the anomaly detection procedure by training a
simple discriminator, which is much more computationally
efficient than the complex statistical algorithms adopted by
the aforementioned embedding-based methods. Specifi-
cally, SimpleNet makes use of a pre-trained backbone for
normal feature extraction, followed by a feature adaptor to transfer the features into the target domain. Then, anomaly
features are simply generated by adding Gaussian noise to
the adapted normal features. A simple discriminator con-
sisting of a few layers of MLP is trained on these features
to discriminate anomalies.
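As a concrete illustration of the pipeline described above, the following is a minimal PyTorch-style sketch. The module names (FeatureAdaptor, Discriminator), the noise scale, and the plain BCE objective are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class FeatureAdaptor(nn.Module):
    """Projects pre-trained backbone features toward the target domain (a single linear layer here)."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, feats):                      # feats: (N, dim) patch features
        return self.proj(feats)

class Discriminator(nn.Module):
    """Small MLP that scores each patch feature as normal (high) or anomalous (low)."""
    def __init__(self, dim, hidden=1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.BatchNorm1d(hidden), nn.LeakyReLU(0.2),
            nn.Linear(hidden, 1),
        )

    def forward(self, feats):
        return self.net(feats).squeeze(-1)

def training_step(backbone_feats, adaptor, disc, noise_std=0.015):
    """One step: adapt normal features, synthesize anomalies by adding Gaussian noise, train the discriminator."""
    normal = adaptor(backbone_feats)                              # target-oriented normal features
    anomalous = normal + noise_std * torch.randn_like(normal)     # synthetic anomalies in feature space
    scores = disc(torch.cat([normal, anomalous], dim=0))
    labels = torch.cat([torch.ones(len(normal)), torch.zeros(len(anomalous))])
    return nn.functional.binary_cross_entropy_with_logits(scores, labels)

# Usage with random stand-in features (e.g. flattened backbone patch descriptors)
feats = torch.randn(256, 1536)
adaptor, disc = FeatureAdaptor(1536), Discriminator(1536)
loss = training_step(feats, adaptor, disc)
loss.backward()
```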
SimpleNet is easy to train and apply, with outstand-
ing performance and inference speed. The proposed Sim-
pleNet, based on the widely used WideResNet-50 backbone,
achieves 99.6% AUROC on MVTec AD while running at
77 FPS, surpassing the previous best published anomaly de-
tection methods in both accuracy and efficiency; see Fig-
ure 2. We further introduce SimpleNet to the task of One-
Class Novelty Detection to show its generality. These ad-
vantages allow SimpleNet to bridge the gap between academic
research and industrial application. Code will be publicly
available.
|
Lin_Cross-Domain_3D_Hand_Pose_Estimation_With_Dual_Modalities_CVPR_2023
|
Abstract
Recent advances in hand pose estimation have shed
light on utilizing synthetic data to train neural networks,
which however inevitably hinders generalization to real-
world data due to domain gaps. To solve this problem,
we present a framework for cross-domain semi-supervised
hand pose estimation and target the challenging scenario of
learning models from labelled multi-modal synthetic data
and unlabelled real-world data. To that end, we propose
a dual-modality network that exploits synthetic RGB and
synthetic depth images. For pre-training, our network uses
multi-modal contrastive learning and attention-fused super-
vision to learn effective representations of the RGB images.
We then integrate a novel self-distillation technique during
fine-tuning to reduce pseudo-label noise. Experiments show
that the proposed method significantly improves 3D hand
pose estimation and 2D keypoint detection on benchmarks.
|
1. Introduction
Hand pose estimation supports a wide range of appli-
cations, including sign language recognition [18, 19] and
gesture-based interaction systems [1]. However, it is dif-
ficult to obtain the large amounts of accurate ground truth
labels required for training deep-learning-based hand pose
estimation systems. Training models with synthetic data is
one option, but such models exhibit a sim-to-real domain
gap and generalize poorly to real-world settings. More so-
phisticated synthesis can narrow this gap, but the perfor-
mance drop is still noticeable [23].
This paper addresses the cross-domain pose estimation
problem and focuses on a semi-supervised setting. We tar-
get learning from labelled synthetic data and unlabelled
real-world data for application to real-world data. Pre-
training with synthetic data is common [14, 32]; surpris-
ingly, only RGB synthetic images have been considered.
Yet when generating synthetic data, it is relatively easy to
render multiple data modalities. For example, the RHD
Figure 1. Attention comparisons between RGB and depth maps.
Apart from the hand region, the RGB attention shows high acti-
vation on the background. The depth map attention however pro-
vides high activation only on the hand, confirming its strength to
focus on task-relevant information.
dataset [41] features both RGB images and depth maps.
Across the modalities, there are shared common visual
cues relating to the underlying geometry or semantics that
are task-relevant for hand pose estimation. To exploit such
information during pre-training, a simple solution is to ap-
ply pixel-wise ℓ1 alignment between feature maps of RGB
and depth maps [37]. However, the large discrepancy be-
tween RGB images and depth maps might cause the pixel-
wise alignment to also focus on irrelevant regions related
to the background. Fig. 1 shows the attention derived from
RGB versus depth maps and their overlays on the original
RGB input. We observe that the RGB attention is strong in
task-related areas, i.e. the hand, but also on unrelated areas
of the background. In contrast, the attention from the depth
map is successfully localized on the hand.
To focus more on relevant regions, we propose a dual-
modality network for RGB images and depth maps that
improves RGB-based hand pose estimation via a multi-
level alignment. Specifically, we design an attention mod-
ule to apply information learned from depth maps to the
RGB image to produce attention-fused RGB features. The
learned information is then aligned to RGB features by
multi-modal supervision on the predictions from all modal-
ities with ground truth. This limits the RGB encoder’s sen-
sitivity to non-informative cues such as background or ir-
relevant textures [10, 34]. At the feature space level, we ex-
plore intra-modal and inter-modal contrastive learning for
multi-modal data. Our work is the first to investigate su-
pervised multi-modal contrastive learning on hand pose es-
timation. In particular, contrastive learning between RGB
features and attention-fused RGB features minimizes fea-
ture discrepancies across modalities.
After pre-training, pseudo-labelling is commonly used
to fine-tune on unlabelled data [4, 20, 22]. However, naïvely
generated pseudo-labels are inevitably noisy and deteriorate
model performance. To handle noisy pseudo-labels, we de-
sign a two-pass self-distillation procedure (see Fig. 2 (b)).
The RGB input is first applied through the multimodal de-
coder to predict a depth map and pose. In a second pass,
the same RGB input is applied together with the predicted
depth map to estimate the pose. As the attention from depth
maps activates RGB features in relevant regions, the two
passes can be considered a refinement or denoising process.
The pose predicted from the RGB in the first pass is encour-
aged to be consistent with the pose from the attention-fused
version in the second pass. Such a procedure distills knowl-
edge from within the network itself. The self-distillation
also prevents the network from over-fitting to noisy samples
that often have unstable predictions [17, 20].
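A minimal sketch of the two-pass consistency just described, assuming hypothetical model methods forward_rgb (RGB-only pass) and forward_fused (RGB plus the predicted depth map); detaching the second pass as the target is an assumption of this sketch, not a detail taken from the paper.

```python
import torch
import torch.nn.functional as F

def self_distillation_loss(model, rgb):
    """Two-pass self-distillation sketch: the attention-fused second pass acts as the target."""
    # Pass 1: RGB only -> predicted depth map and pose
    depth_pred, pose_rgb = model.forward_rgb(rgb)            # hypothetical API
    # Pass 2: the same RGB plus its predicted depth -> attention-fused (refined) pose
    with torch.no_grad():                                    # one possible choice: treat the refined pose as fixed
        pose_fused = model.forward_fused(rgb, depth_pred)    # hypothetical API
    # Encourage the first-pass pose to be consistent with the refined second-pass pose
    return F.mse_loss(pose_rgb, pose_fused)
```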
In summary, we make the following contributions:
1. We propose a dual-modality network that learns from
RGB images and depth maps but is applicable to
only RGB inputs for fine-tuning and inference. The
network features a specially designed attention mod-
ule that identifies geometric relationships common to
RGB and depth from stand-alone RGB images.
2. We propose the first supervised multi-modal con-
trastive learning method based on fused features to
minimize feature discrepancies across modalities.
3. We introduce a self-distillation procedure to exploit yet
not over-fit to noisy pseudo-labels during fine-tuning.
4. The proposed method significantly improves the state-
of-the-art by up to 16.0% and 14.8% for 2D keypoint
detection and 3D keypoint estimation respectively.
|
Li_Generalized_Deep_3D_Shape_Prior_via_Part-Discretized_Diffusion_Process_CVPR_2023
|
Abstract
We develop a generalized 3D shape generation prior
model, tailored for multiple 3D tasks including uncondi-
tional shape generation, point cloud completion, and cross-
modality shape generation, etc. On one hand, to pre-
cisely capture local fine detailed shape information, a vec-
tor quantized variational autoencoder (VQ-VAE) is utilized
to index local geometry from a compactly learned code-
book based on a broad set of task training data. On the
other hand, a discrete diffusion generator is introduced to
model the inherent structural dependencies among different
tokens. In the meantime, a multi-frequency fusion module
(MFM) is developed to suppress high-frequency shape fea-
ture fluctuations, guided by multi-frequency contextual in-
formation. The above designs jointly equip our proposed
3D shape prior model with high-fidelity, diverse features
as well as the capability of cross-modality alignment, and
extensive experiments have demonstrated superior perfor-
mances on various 3D shape generation tasks.
|
1. Introduction
While pre-trained 2D prior models [24, 43] have shown
great power in various downstream vision tasks such as im-
age classification, editing and cross-modality generation,
etc., their counterpart 3D prior models which are gener-
ally beneficial for three-dimensional shape generation tasks
have NOT been well developed, unfortunately. On the
contrary, the graphics community has developed a num-
ber of task-specific pre-trained models, tailored for unary
tasks such as 3D shape generation [22, 31, 60], point
cloud completion [62, 66, 67] and conditional shape pre-
diction [14, 30, 34, 54]. Since the above individual 3D gen-
erative representations do NOT share common knowledge
among tasks, migrating a trained 3D shape network from
one task to another related one requires troublesome end-to-
end model re-work, and training resources are also wasted.
For instance, a good shape encoding of “chairs” based on a
general prior 3D model could benefit shape completion of a
given partial chair and text-guided novel chair generation.
Figure 1. Our shape generation model is a unified and efficient
prior model to produce high-fidelity, diverse results on multiple
tasks, while most previous approaches are task-specific.
This work aims at designing a unified 3D shape gener-
ative prior model, which serves as a generalized backbone
for multiple downstream tasks, i.e., with few requirements
for painstaking adaptation of the prior model itself. Such a
general 3D prior model should possess the following good
properties. On one hand, it should be expressive enough
to generate high-fidelity shape generation results with fine-
grained local details. On the other hand, this general prior
should cover a large probabilistic support region so that it
could sample diverse shape prototypes in both conditional
and unconditional generation tasks. Moreover, to well sup-
port cross-modal generation, e.g., text-to-shape, the sam-
pled representation from the prior model should achieve
good semantic consistency between different modalities,
making it easy to encode and match partial shapes, images
and text prompts.
The above criteria, however, are rarely satisfied by exist-
ing 3D shape modeling approaches. Encoder-decoder based
structures [62, 66] usually focus on dedicated tasks and fail
to capture diverse shape samples because the generation
process mostly relies on features sampled from determinis-
tic encoders. On the contrary, probabilistic models such as
GAN [29, 48], Flow [60] and diffusion models [22, 31, 67],
can neither adapt to multiple tasks flexibly nor generate
high-quality shapes without artifacts [64]. Note that re-
cent models such as AutoSDF [34] and Lion [64] explore
multiple conditional generative frameworks. However, Au-
toSDF [34] lacks diversity and shows mode collapse in
unconditional and text-guided generation, while
training and inference of Lion [64] are costly.
In view of above limitations, this work proposes
an efficient 3D-Disassemble-Quantization-and-Diffusion
(3DQD) prior model to simultaneously address above chal-
lenges in 3D shape generation: high-fidelity, diversity, and
good cross-modality alignment ability, as shown in Fig. 1.
This 3DQD prior is a unified, probabilistic and powerful
backbone for both unconditional shape generation and mul-
tiple conditional shape completion applications. On one
hand, instead of using holistic codes for whole-shape, we
adopt a vector quantized variational autoencoder [52] (VQ-
VAE) that learns a compact representation for disassembled
local parts of the shape, which is expected to well repre-
sent diverse types of local geometry information out of a
broad set of training shapes from shared tasks, forming a
generalized part-level codebook. Note that this disassem-
bled and quantized representation effectively eliminates the
intrinsic structural bias between different modalities, and
thus is able to perform good cross-modality data alignment
on conditional generation tasks. On the other hand, a dis-
crete diffusion generator [4, 16] with reverse Markov chain
is introduced to model the inherent semantic dependen-
cies among geometric tokens. Namely, the forward process
corrupts the latent variables with progressively increasing
noise, transferring parts of variables into random or masked
ones; and the reverse process gradually recovers the vari-
ables towards the desired data distribution by learning the
structural connections among geometry tokens. It is worth
mentioning that random corruption in the forward process
facilitates diverse samples, while discrete embedding re-
sults in a dramatic cost-down in computational budget, with
stable and iterative sampling. Furthermore, during shape
generation we introduce Multi-frequency Fusion Module
(MFM) to suppress high-frequency outliers, encouraging
smoother and high-fidelity samples.
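To make the forward corruption concrete, here is a small sketch of one discrete-diffusion corruption step over VQ token indices, where each token is replaced by a [MASK] id or a random codebook entry with probability growing in t. The linear schedule and the 50/50 mask-versus-random split are illustrative assumptions, not the paper's exact transition matrix; the reverse process would train a denoiser to predict the original tokens from the corrupted ones.

```python
import torch

def corrupt_tokens(tokens, t, num_steps, codebook_size, mask_id):
    """Forward-process sketch of a discrete diffusion over VQ token indices.
    With probability growing in t, each token becomes the mask id or a random codebook entry."""
    p_corrupt = (t + 1) / num_steps                                   # simple linear schedule (assumption)
    corrupt = torch.rand_like(tokens, dtype=torch.float) < p_corrupt  # which tokens to corrupt
    use_mask = torch.rand_like(tokens, dtype=torch.float) < 0.5       # mask vs. random replacement
    random_ids = torch.randint_like(tokens, codebook_size)
    noised = torch.where(use_mask, torch.full_like(tokens, mask_id), random_ids)
    return torch.where(corrupt, noised, tokens)

# Usage with stand-in values: a batch of flattened VQ-VAE token indices
tokens = torch.randint(0, 512, (4, 512))
x_t = corrupt_tokens(tokens, t=30, num_steps=100, codebook_size=512, mask_id=512)
```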
Comprehensive and various downstream experiments
on shape generation tasks demonstrate that the proposed
3DQD prior is capable of helping synthesize high-fidelity,
diverse shapes efficiently, outperforming multiple state-of-
the-art methods. Furthermore, our 3DQD prior model can
serve as a generalized backbone with highly competitive
samples for extended applications and cross-domain tasks
requiring NO or little tuning.
|
Kotwal_Swept-Angle_Synthetic_Wavelength_Interferometry_CVPR_2023
|
Abstract
We present a new imaging technique, swept-angle syn-
thetic wavelength interferometry, for full-field micron-scale
3D sensing. As in conventional synthetic wavelength interfer-
ometry, our technique uses light consisting of two narrowly-
separated optical wavelengths, resulting in per-pixel inter-
ferometric measurements whose phase encodes scene depth.
Our technique additionally uses a new type of light source
that, by emulating spatially-incoherent illumination, makes
interferometric measurements insensitive to aberrations and
(sub)surface scattering, effects that corrupt phase measure-
ments. The resulting technique combines the robustness to
such corruptions of scanning interferometric setups, with
the speed of full-field interferometric setups. Overall, our
technique can recover full-frame depth at a lateral and ax-
ial resolution of 5 µm, at frame rates of 5 Hz, even under
strong ambient light. We build an experimental prototype,
and use it to demonstrate these capabilities by scanning a
variety of objects, including objects representative of ap-
plications in inspection and fabrication, and objects that
contain challenging light scattering effects.
|
1. Introduction
Depth sensing is among the core problems of computer
vision and computational imaging, with widespread appli-
cations in medicine, industry, and robotics. An array of
techniques is available for acquiring depth maps of three-
dimensional (3D) scenes at different scales. In particular,
micrometer-resolution depth sensing, our focus in this pa-
per, is important in biomedical imaging, where biological
features are often micron-scale; in industrial fabrication and
inspection of critical parts that must conform to their specifi-
cations (Figure 1); and in robotics for handling fine objects.
Active illumination depth sensing techniques such as li-
dar, structured light, and correlation time-of-flight (ToF)
cannot provide micrometer axial resolution. Instead, we
focus on interferometric techniques that can achieve such
resolutions. The operational principles and characteristics
of interferometric techniques vary, depending on the type of
active illumination and optical configurations they use.
The choice of illumination spectrum leads to techniques
such as optical coherence tomography (OCT), which uses
broadband illumination, and phase-shifting interferometry
(PSI), which uses monochromatic illumination. We consider
[Figure 1. Applications of swept-angle SWI in industrial inspection and fabrication. We show depth reconstructions for two scenes representative of these applications: millimeter-scale dents on an aircraft fuselage section, and a 3D-printed coin.]
synthetic wavelength interferometry (SWI), which operates
between these two extremes: By using illumination con-
sisting of two narrowly-separated optical wavelengths, SWI
provides a controllable trade-off between the large unam-
biguous depth range of OCT, and the large axial resolution of
PSI. SWI can achieve micrometer resolution at depth ranges
in the order of hundreds of micrometers.
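For reference, this trade-off follows from the textbook synthetic-wavelength relations (standard SWI background, not a derivation taken from this paper):

```latex
% Two narrowly separated wavelengths \lambda_1, \lambda_2 form a synthetic wavelength
\Lambda = \frac{\lambda_1 \lambda_2}{\lvert \lambda_1 - \lambda_2 \rvert},
\qquad
d \approx \frac{\Lambda}{4\pi}\,\phi_{\mathrm{syn}},
```

where phi_syn is the measured synthetic phase and the factor of 4π accounts for the illumination round trip. Bringing the two wavelengths closer increases Λ, and thus the unambiguous range Λ/2, while the depth error scales with Λ times the phase noise, which is the controllable range-versus-resolution trade-off described above.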
The choice of optical configuration results in full-field
versus scanning implementations, which offer different trade-
offs. Full-field implementations acquire entire 2D depth
maps, offering simultaneously fast operation and pixel-level
lateral resolutions. However, full-field implementations are
very sensitive to effects that corrupt depth estimation, such
as imperfections in free-space optics (e.g., lens aberrations)
and indirect illumination (e.g., subsurface scattering). By
contrast, scanning implementations use beam steering to se-
quentially scan points in a scene and produce a 2D depth
map. Scanning implementations offer robustness to depth
corruption effects, through the use of fiber optics to reduce
aberrations, and co-axial illumination and sensing to elimi-
nate most indirect illumination. However, scanning makes
acquiring depth maps with pixel-level lateral resolution and
megapixel sizes impractically slow.
We develop a 3D sensing technique, swept-angle syn-
thetic wavelength interferometry , that combines the com-
plementary advantages of full-field and scanning implemen-
tations. We draw inspiration from previous work showing
[Figure 2. Reconstructing the eagle embossed on a $20 bill. The features on the eagle are raised 10 µm off the surface of the bill. The recovered depth shows fine details, such as the gaps between the wings, reconstructed with high lateral and axial resolution.]
that the use of spatially-incoherent illumination in full-field
interferometry mitigates aberration and indirect illumination
effects. We design a light source that emulates spatial inco-
herence, by scanning, within a single exposure, a dichromatic point
source at the focal plane of the illumination lens, effectively
sweeping the illumination angle, hence swept-angle. We
combine this light source with full-field SWI, resulting in a
3D sensing technique with the following performance charac-
teristics: 5 Hz full-frame 2 Mpixel acquisition; 5 µm lateral
and axial resolution; range and resolution tunability; and
robustness to aberrations, ambient illumination, and indi-
rect illumination. We build an experimental prototype, and
use it to demonstrate these capabilities, as in Figure 2. We
provide setup details, reconstruction code, and data in the
supplement and project website.
Potential impact. Swept-angle SWI is relevant for critical
applications, including industrial fabrication and inspection.
In industrial fabrication, swept-angle SWI can be used to pro-
vide feedback during additive and subtractive manufacturing
processes [ 67]. In industrial inspection, swept-angle SWI
can be used to examine newly-fabricated or used in-place
critical parts and ensure they comply with operational speci-
fications. Swept-angle SWI, uniquely among 3D scanning
techniques, offers a combination of features that are critical
for both applications: First, high acquisition speed, which is
necessary to avoid slowing down the manufacturing process,
and to perform inspection efficiently. Second, micrometer
lateral and axial resolution, which is necessary to detect crit-
ical defects. Third, robustness to aberrations and indirect
illumination, which is necessary because of the materials
used for fabrication, which often have strong subsurface scat-
tering. Figure 1 showcases results representative of these
applications: We scan a fuselage section from a Boeing air-
craft, to detect critical defects such as scratches and bumps
from collisions, at axial and lateral scales of a couple dozen
micrometers. We also scan a coin pattern 3D-printed by a
commercial material printer on a translucent material.
|
Khwanmuang_StyleGAN_Salon_Multi-View_Latent_Optimization_for_Pose-Invariant_Hairstyle_Transfer_CVPR_2023
|
Abstract
Our paper seeks to transfer the hairstyle of a reference
image to an input photo for virtual hair try-on. We target
a variety of challenges scenarios, such as transforming a
long hairstyle with bangs to a pixie cut, which requires re-
moving the existing hair and inferring how the forehead
would look, or transferring partially visible hair from a
hat-wearing person in a different pose. Past solutions lever-
age StyleGAN for hallucinating any missing parts and pro-
ducing a seamless face-hair composite through so-called
GAN inversion or projection. However, there remains a chal-
lenge in controlling the hallucinations to accurately transfer
hairstyle and preserve the face shape and identity of the
input. To overcome this, we propose a multi-view optimiza-
tion framework that uses two different views of reference
composites to semantically guide occluded or ambiguous
regions. Our optimization shares information between two
poses, which allows us to produce high fidelity and realistic
results from incomplete references. Our framework produces
high-quality results and outperforms prior work in a user
study that consists of significantly more challenging hair
transfer scenarios than previously studied. Project page:
https://stylegan-salon.github.io/ .
|
1. Introduction
What makes Jennifer Aniston keep the same hairstyle for
over three decades? Perhaps she likes the classic look, or perhaps
changing her hairstyle is a decision so high-stakes that she
could later regret it. Unlike garments or makeup, trying on a
new hairstyle is not easy, and being able to imagine yourself
in different hairstyles could be an indispensable tool.
Recent approaches for hairstyle transfer, StyleY-
ourHair [20], Barbershop [46], LOHO [30], and Michi-
GAN [35], allow users to manipulate multiple hair attributes
of an input image, such as appearance, shape, or color by pro-
viding a reference image for each different attribute. These
methods [20, 30, 46] rely on a generative adversarial net-
work [11], specifically StyleGAN2 [19], which can synthe-
size highly realistic face images. Their key idea, which also
forms the basis of our method, is to leverage the realistic face
distribution learned by StyleGAN and search for a hairstyle-
transfer output whose latent code lies within the learned
distribution using optimization (commonly known as GAN
projection or inversion).
Our extensive study of these state-of-the-art techniques
still reveals several unsolved challenges for in-the-wild
hairstyle transfer. One of the main challenges arises when the
reference hair comes from a person with a very different head
pose or facial shape. In this case, the transfer result often
degrades significantly [30, 46]. HairFIT [8] and the recently
proposed StyleYourHair [20] both attempt to solve this using
an additional alignment step to align the pose of the target
hair to the input face. In HairFIT [8], this is done explicitly
via a flow-based warping module for hair segmentation, but
this requires training on multi-view datasets [21, 26]. Sty-
leYourHair [20] avoids this issue and also improves upon
HairFIT’s results by optimizing for an aligned pose within
StyleGAN2’s latent space using distances between the de-
tected facial keypoints. While StyleYourHair can handle a
certain degree of misalignment, it often struggles to preserve
details of the reference hair texture, especially for intricate
and non-straight hairstyles, e.g., in Figure 3.
In general, we have observed that there is a trade-off be-
tween hallucinating new details, which is crucial for handling
highly different poses, and preserving the original texture
from the reference images. These two goals are often at odds
with each other. This trade-off is also evident in the results
from StyleYourHair, as reported in [20] and as shown in ta-
ble 3, where BarberShop [46] can still produce better results,
for example, when the pose difference is small.
We tackle this dilemma by performing a multi-stage op-
timization. This serves two purposes: first, to hallucinate
new details necessary for aligning the poses, and second, to
recover face-hair details from the original input images. We
preserve these details in a form of two guide images in both
viewpoints, which will be jointly optimized to allow new
details to be filled while retaining face-hair texture in the
original pose. Our pose alignment is done both explicitly via
3D projection [6] on RGB images, and implicitly via latent
code(s) sharing during our multi-view optimization.
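A minimal sketch of the multi-view latent optimization idea: two per-view latents are fitted to their respective guide images while a coupling term shares information across poses. The `synthesize` callable, the plain L2 reconstruction losses, and the explicit coupling term are assumptions of this sketch rather than the paper's exact formulation.

```python
import torch

def multiview_optimize(synthesize, w_init, guide_a, guide_b, steps=500, lr=0.01, lam=1.0):
    """Jointly optimize one latent per view against its guide image, with a sharing term
    coupling the two views (a stand-in for the paper's latent-code sharing).
    `synthesize(w, view)` is a hypothetical callable rendering the generator output for a view."""
    w_a = w_init.clone().requires_grad_(True)
    w_b = w_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([w_a, w_b], lr=lr)
    for _ in range(steps):
        rec = (synthesize(w_a, "a") - guide_a).pow(2).mean() \
            + (synthesize(w_b, "b") - guide_b).pow(2).mean()   # per-view reconstruction to the guides
        share = (w_a - w_b).pow(2).mean()                      # information sharing between poses
        loss = rec + lam * share
        opt.zero_grad(); loss.backward(); opt.step()
    return w_a.detach(), w_b.detach()
```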
In summary, our contributions are as follows:
1.We propose StyleGAN Salon: a pose-invariant hairstyle
transfer pipeline that is flexible enough to handle a vari-
ety of challenging scenarios including, but not limited
to, bangs/hat removal and background inpainting.
2.Unlike previous works, our method operates entirely on
RGB images, which are more flexible than segmenta-
tion masks. This allows us to first draft the output and
then refine them via multi-stage optimization.
3.We introduce multi-view optimization for hairstyle
transfer which incorporates 3D information to align
the poses for both face and hair images. Our method
leverages both views to help preserve details from the
original images.
4.We thoroughly analyze the results in several experi-
ments, including a user study with detailed breakdown
into various challenging scenarios. Our method shows
superior results over existing works in allscenarios.
|
Kim_The_Devil_Is_in_the_Points_Weakly_Semi-Supervised_Instance_Segmentation_CVPR_2023
|
Abstract
In this paper, we introduce a novel learning scheme
named weakly semi-supervised instance segmentation (WS-
SIS) with point labels for budget-efficient and high-
performance instance segmentation. Namely, we consider a
dataset setting consisting of a few fully-labeled images and
a lot of point-labeled images. Motivated by the observation
that the main challenge of semi-supervised approaches derives from
the trade-off between false-negative and false-positive in-
stance proposals, we propose a method for WSSIS that
can effectively leverage the budget-friendly point labels as
a powerful weak supervision source to resolve the chal-
lenge. Furthermore, to deal with the hard case where the
amount of fully-labeled data is extremely limited, we pro-
pose a MaskRefineNet that refines noise in rough masks.
We conduct extensive experiments on COCO and BDD100K
datasets, and the proposed method achieves promising re-
sults comparable to those of the fully-supervised model,
even with 50% of the fully labeled COCO data (38.8% vs.
39.7%). Moreover, when using as little as 5% of fully la-
beled COCO data, our method shows significantly supe-
rior performance over the state-of-the-art semi-supervised
learning method (33.7% vs. 24.9%). The code is available
at https://github.com/clovaai/PointWSSIS .
|
1. Introduction
Recently proposed instance segmentation methods [5,
8, 9, 13, 15, 16, 23, 32, 42, 45] have achieved remark-
able performance owing to the availability of abundant
of segmentation labels for training. However, compared
to other label types ( e.g., bounding box or point), seg-
mentation labels necessitate delicate pixel-level annota-
tions, demanding much more monetary cost and human ef-
fort. Consequently, weakly-supervised instance segmen-
tation (WSIS) and semi-supervised instance segmentation
(SSIS) approaches have gained attention to reduce anno-
Figure 1. Proposals and instance masks. The absence of a pro-
posal leads to the missing mask, even though the mask could be
generated if given the correct proposal (zebra). Also, noise pro-
posal often leads to noisy masks. Our motivation stems from the
bottleneck in the proposal branch, and this paper shows that economical
point labels can be leveraged to resolve it.
tation costs. WSIS approaches alternatively utilize inex-
pensive weak labels such as image-level labels [1, 26, 52],
point labels [11, 26, 27] or bounding box labels [19, 31, 43].
Besides, SSIS approaches [47, 51] employ a small amount
of pixel-level (fully) labeled data and a massive amount
of unlabeled data. Although they have shown potential
in budget-efficient instance segmentation, there still exists
a large performance gap between theirs and the results of
fully-supervised learning methods.
Specifically, SSIS approaches often adopt the following
training pipeline: (1) train a base network with fully la-
beled data, (2) generate pseudo instance masks for unla-
beled images using the base network, and (3) train a target
network using both full and pseudo labels. The major chal-
lenge of SSIS approaches comes from the trade-off between
the number of missing ( i.e., false-negative) and noise ( i.e.,
false-positive) samples in the pseudo labels. Namely, some
strategies for reducing false-negatives, which is equivalent
to increasing true-positives, often end up increasing false-
positives accordingly; an abundance of false-negatives or
false-positives in pseudo labels impedes stable convergence
of the target network. However, optimally reducing false-
negatives/positives while increasing true-positives is quite
difficult and remains a significant challenge for SSIS.
To address this challenge, we first revisit the funda-
mental behavior of the instance segmentation framework.
Most existing instance segmentation methods adopt a two-
step inference process as shown in Figure 1: (1) generate
instance proposals where an instance is represented as a
box [9, 16, 21, 32] or point [42, 44–46] in proposal branch,
and (2) produce instance masks for each instance proposal
in mask branch. As shown in Figure 1, if the network fails to
obtain an instance proposal ( i.e.,false-negative proposal), it
cannot produce the corresponding instance mask. Although
the network could represent the instance mask in the mask
branch, the absence of the proposal becomes the bottleneck
for producing the instance mask. From the behavior of the
network, we suppose that addressing the bottleneck in the
proposals is a shortcut to the success of the SSIS.
Motivated by the above observations, we rethink the po-
tential of using point labels as weak supervision. The point
label contains only a one-pixel categorical instance cue but
is budget-friendly as it is as easy as providing image-level
labels by human annotators [3]. We note that the point la-
bel can be leveraged as an effective source to (i) resolve the
performance bottleneck of the instance segmentation net-
work and (ii) optimally balance the trade-off between false-
negative and false-positive proposals. Thus, we formulate a
new practical training scheme, Weakly Semi-Supervised
Instance Segmentation (WSSIS) with point labels . In
the WSSIS task, we utilize a small amount of fully labeled
data and a massive amount of point labeled data for budget-
efficient and high-performance instance segmentation.
Under the WSSIS setting, we filter out the proposals
to keep only true-positive proposals using the point labels.
Then, given the true-positive proposals, we exploit the mask
representation of the network learned by fully labeled data
to produce high-quality pseudo instance masks. For prop-
erly leveraging point labels, we consider the characteristics
of the feature pyramid network (FPN) [34], which consists
of multi-level feature maps for multi-scale instance recog-
nition. Each pyramid level is trained to recognize instances
of particular sizes, and extracting instance masks from un-
fit levels often causes inaccurate predictions, as shown in
Figure 4. However, since point labels do not have instance
size information, we handle this using an effective strat-
egy named Adaptive Pyramid-Level Selection. We estimate
which level is the best fit based on the reliability of the net-
work ( i.e., confidence score) and then adaptively produce
an instance mask at the selected level.
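A minimal sketch of the pyramid-level selection idea, assuming per-level score maps and strides are available; the exact confidence definition used by the paper may differ.

```python
import torch

def select_pyramid_level(fpn_scores, point_xy, strides):
    """Read the network confidence at the labelled point on every FPN level and keep the
    most confident one. fpn_scores is a list of (C, H_l, W_l) score maps; strides gives
    each level's downsampling factor (both are assumptions of this sketch)."""
    best_level, best_conf = None, -1.0
    x, y = point_xy
    for level, (scores, s) in enumerate(zip(fpn_scores, strides)):
        conf = scores[:, int(y // s), int(x // s)].max().item()  # confidence at the point location
        if conf > best_conf:
            best_level, best_conf = level, conf
    return best_level, best_conf

# Usage with stand-in score maps for strides 8, 16, 32
maps = [torch.rand(80, 100, 100), torch.rand(80, 50, 50), torch.rand(80, 25, 25)]
level, conf = select_pyramid_level(maps, point_xy=(320.0, 240.0), strides=(8, 16, 32))
```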
Meanwhile, on an extremely limited amount of fully la-
beled data, the network often fails to sufficiently represent
the instance mask in the mask branch, resulting in rough
and noisy mask outputs. In other words, the true-positive
proposal does not always lead to a true-positive instance
mask in this case. To cope with this limitation, we pro-
pose a MaskRefineNet to refine the rough instance mask. The MaskRefineNet takes three input sources, i.e., image,
rough mask, and point; the image provides visual informa-
tion about the target instance, the rough mask is used as the
prior knowledge to be refined, and the point information ex-
plicitly guides the target instance. Using the richer instruc-
tive input sources, MaskRefineNet can be stably trained
even with a limited amount of fully labeled data.
To demonstrate the effectiveness of our method, we
conduct extensive experiments on the COCO [35] and
BDD100K [49] datasets. When training with half of the
fully labeled images and the rest of the point labeled im-
ages on the COCO dataset ( i.e., 50% COCO), we achieve a
competitive performance with the fully-supervised perfor-
mance (38.8% vs. 39.7%). In addition, when using a small
amount of fully labeled data, e.g., 5% of COCO data, the
proposed method shows far superior performance to
the state-of-the-art SSIS method [47] (33.7% vs. 24.9%).
In summary, the contributions of our paper are:
• We show that point labels can be leveraged as effec-
tive weak supervisions for budget-efficient and high-
performance instance segmentation. Further, based on
this observation, we establish a new training protocol
named Weakly Semi-Supervised Instance Segmenta-
tion (WSSIS) with point labels.
• To further boost the quality of the pseudo instance
masks when the amount of fully labeled data is
extremely limited, we propose the MaskRefineNet,
which refines noisy parts of the rough instance masks.
• Extensive experimental results show that the proposed
method can achieve competitive performance to those
of the fully-supervised models while significantly out-
performing the semi-supervised methods.
|
Liu_Generating_Anomalies_for_Video_Anomaly_Detection_With_Prompt-Based_Feature_Mapping_CVPR_2023
|
Abstract
Anomaly detection in surveillance videos is a challeng-
ing computer vision task where only normal videos are
available during training. Recent work released the first
virtual anomaly detection dataset to assist real-world detec-
tion. However, an anomaly gap exists because the anoma-
lies are bounded in the virtual dataset but unbounded in the
real world, so it reduces the generalization ability of the
virtual dataset. There also exists a scene gap between vir-
tual and real scenarios, including scene-specific anomalies
(events that are abnormal in one scene but normal in an-
other) and scene-specific attributes, such as the viewpoint
of the surveillance camera. In this paper, we aim to solve
the problem of the anomaly gap and scene gap by proposing
a prompt-based feature mapping framework (PFMF). The
PFMF contains a mapping network guided by an anomaly
prompt to generate unseen anomalies with unbounded types
in the real scenario, and a mapping adaptation branch to
narrow the scene gap by applying domain classifier and
anomaly classifier. The proposed framework outperforms
the state-of-the-art on three benchmark datasets. Exten-
sive ablation experiments also show the effectiveness of our
framework design.
|
1. Introduction
Video anomaly detection (VAD) aims to identify abnor-
mal scenarios in surveillance videos with broad applications
in public security. However, due to the small probability of
occurrence, abnormal events are difficult to observe in
real-life surveillance. The challenge increases because of
the unconstrained nature of abnormal events: given a spe-
cific scenario, any event that differs from normal events can
be regarded as an anomaly, so the anomaly types are un-
bounded.
Most VAD approaches address this challenge by learn-
ing the distribution of normal events in the training stage
and detecting the out-of-distribution events in the testing
stage. These methods are categorized into reconstruction-
based methods [1, 14, 31] to reconstruct the current frame
and prediction-based methods [26, 27, 27, 30, 34] to predict
the upcoming frame. Significant reconstruction or predic-
tion error is regarded as an anomaly. However, due to the
strong generalization ability of the deep networks and the
similarity between normal and abnormal events, the anoma-
lies do not always lead to enough error to be detected. With-
out prior knowledge of abnormal distribution, it is difficult
for the network to detect unseen anomalies.
Therefore, instead of calculating error with the distri-
bution of normal behaviors, some methods [11, 12, 53, 54]
try to generate pseudo anomalies to simulate the distribu-
tion of abnormal behaviors. For example, Georgescu et
al. [12] collect a large number of images from Tiny Ima-
geNet unrelated to the detection scenario as pseudo anoma-
lous samples. Their other work [11] tries to generate tempo-
ral anomalies by reversing the action order or motion irreg-
ularity by extracting intermittent frames. The network can
get a glimpse of the feature distribution different from nor-
mal events by manually applying pseudo anomalies. How-
ever, the main drawback of these methods is the unavoid-
able gap between pseudo and natural anomalies.
To solve the problem of pseudo anomalies, Acsintoae et
al. [2] released a virtual VAD dataset named UBnormal us-
ing 3D animations and 2D background images. It contains
22 types of anomaly, such as fighting, stealing, laying down,
etc. The distribution of real anomalies can be well eval-
uated by applying the virtual dataset. However, applying
virtual anomalies to real scenarios is a great challenge due
to the large domain gap. Acsintoae et al. [2] train a Cycle-
GAN [60] to achieve video-level style transfer from virtual
to the real domain to address the challenge.
However, existing methods fail to address two key chal-
lenges. Firstly, the anomalies are bounded in the virtual
dataset but unbounded in the real world, and we define
[Figure 1. An overview of the prompt-based feature mapping framework (PFMF). PFMF contains three parts: a feature extractor, a prompt-based feature mapping network, and a mapping adaptation branch. The feature extractor transforms input instances into corresponding features, so the mapping process is performed at the feature level. The prompt-based feature mapping network maps normal features into the abnormal feature space of the same domain, guided by an anomaly prompt, so unseen anomalies in the real domain can be generated from normal features. The mapping adaptation branch makes the generated anomalies scene-specific and addresses scene-specific attributes.]
this difference as anomaly gap . Secondly, different scenar-
ios have scene-specific anomalies (events that are abnormal
in one scene but normal in another) and scene-specific at-
tributes (such as the viewpoint of the surveillance camera),
and we define this difference as scene gap .
Our work is motivated by the above two key challenges.
To solve the problem of anomaly gap and scene gap, we pro-
pose a novel framework named prompt-based feature map-
ping framework (PFMF), as shown in Fig. 1. In terms of
narrowing the anomaly gap, the PFMF employs a prompt-
guided mapping network to generate unbounded anomalies
through a divergent mapping process. The prompts are sam-
pled from a distribution learned by a variational auto-encoder
(VAE) [17]. As for the scene gap, we introduce a mapping
adaptation branch to solve it. In detail, the branch consists
of an anomaly classifier to make the generated anomalies
scene-specific, and two domain classifiers to reduce the in-
consistency caused by scene-specific attributes.
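A minimal sketch of the prompt-guided mapping step, with illustrative module names and dimensions; the actual mapping network, prompt VAE, and adaptation-branch classifiers are not reproduced here.

```python
import torch
import torch.nn as nn

class PromptedMapper(nn.Module):
    """Maps a normal feature, concatenated with a sampled anomaly prompt, to a
    pseudo-abnormal feature in the same domain (names and sizes are illustrative)."""
    def __init__(self, feat_dim, prompt_dim, hidden=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim + prompt_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, feat_dim),
        )

    def forward(self, normal_feat, prompt):
        return self.net(torch.cat([normal_feat, prompt], dim=-1))

def sample_prompt(mu, logvar):
    """Reparameterized sample from the prompt distribution learned by a VAE."""
    return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

# Usage with stand-in tensors: normal features from the extractor plus sampled prompts
feat = torch.randn(8, 256)
prompt = sample_prompt(torch.zeros(8, 32), torch.zeros(8, 32))
pseudo_abnormal = PromptedMapper(256, 32)(feat, prompt)
```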
In summary, this paper makes the following contribu-
tions:
(1) Proposing a novel prompt-based feature mapping
framework (PFMF) for video anomaly detection. This
framework addresses the challenge of applying virtual VAD
datasets with limited anomalies to the real scenario by gen-
erating unseen anomalies with unbounded types.
(2) Proposing a mapping adaptation branch to ensure the
anomalies generated by PFMF are scene-specific and solve
the problem of scene-specific attributes.
(3) Showing the effectiveness of the proposed framework
on three public VAD datasets, ShanghaiTech, Avenue, and
UCF-Crime. Extensive experiments show that the proposed
framework outperforms the state-of-the-art.
|
Lin_One-Stage_3D_Whole-Body_Mesh_Recovery_With_Component_Aware_Transformer_CVPR_2023
|
Abstract
Whole-body mesh recovery aims to estimate the 3D hu-
man body, face, and hands parameters from a single im-
age. It is challenging to perform this task with a single
network due to resolution issues, i.e., the face and hands
are usually located in extremely small regions. Existing
works usually detect hands and faces, enlarge their reso-
lution to feed in a specific network to predict the parameter,
and finally fuse the results. While this copy-paste pipeline
can capture the fine-grained details of the face and hands,
the connections between different parts cannot be easily re-
covered in late fusion, leading to implausible 3D rotation
and unnatural pose. In this work, we propose a one-stage
pipeline for expressive whole-body mesh recovery, named
OSX, without separate networks for each part. Specifically,
we design a Component Aware Transformer (CAT) com-
posed of a global body encoder and a local face/hand de-
coder. The encoder predicts the body parameters and pro-
vides a high-quality feature map for the decoder, which per-
forms a feature-level upsample-crop scheme to extract high-
resolution part-specific features and adopts keypoint-guided deformable attention to estimate the hand and face precisely.
The whole pipeline is simple yet effective without any man-
ual post-processing and naturally avoids implausible pre-
diction. Comprehensive experiments demonstrate the effec-
tiveness of OSX. Lastly, we build a large-scale Upper-Body
dataset (UBody) with high-quality 2D and 3D whole-body
annotations. It contains persons with partially visible bod-
ies in diverse real-life scenarios to bridge the gap between
the basic task and downstream applications.
|
1. Introduction
Expressive whole-body mesh recovery aims to jointly es-
timate the 3D human body poses, hand gestures, and fa-
cial expressions from monocular images. It is gaining in-
creasing attention due to recent advancements in whole-
body parametric models ( e.g., SMPL-X [37]). This task is a
key step in modeling human behaviors and has many appli-
cations, e.g., motion capture, human-computer interaction.
Previous research focus on individual tasks of reconstruct-
ing human body [9, 21, 25, 44, 48, 49], face [2, 10, 12, 43],
or hand [4, 8, 16]. However, whole body mesh recovery is
particularly challenging as it requires accurate estimation of
each part and natural connections between them.
Existing learning-based works [13, 29, 36, 41, 51] use
multi-stage pipelines for body, hand, and face estimation
to achieve the goal of this task. As depicted in Figure 1(a),
these methods typically detect different body parts, crop and
resize each region, and feed them into separate expert mod-
els to estimate the parameters of each part. The multi-stage
pipeline with different estimators for body, hand, and face
results in a complicated system with a large computational
complexity. Moreover, the blocked communications among
different components inevitably cause incompatible config-
urations, unnatural articulation of the mesh, and implau-
sible 3D wrist rotations as they cannot obtain informative
and consistent clues from other components. Some meth-
ods [13,29,51] attempt to alleviate these issues by designing
additional complicated integration schemes or elbow-twist
compensation fusion among individual body parts. How-
ever, these approaches can be regarded as a late fusion strat-
egy and thus have limited ability to enhance each other and
correct implausible predictions.
In this work, we propose a one-stage framework named
OSX for 3D whole-body mesh recovery, as shown in Fig-
ure 1(b), which does not require separate networks for each
part. Inspired by recent advancements in Vision Transform-
ers [11, 47], which are effective in capturing spatial infor-
mation in a plain architecture, we design our pipeline as a
component-aware Transformer (CAT) composed of a global
body encoder and a local component-specific decoder. The
encoder equipped with body tokens as inputs captures the
global correlation, predicts the body parameters, and si-
multaneously provides high-quality feature map for the de-
coder. The decoder utilizes a differentiable upsample-crop
scheme to extract part-specific high-resolution features and
adopt the keypoint-guided deformable attention to precisely
locate and estimate hand and face parameters. The proposed
pipeline is simple yet effective without any manual post-
processing. To the best of our knowledge, this is the first
one-stage pipeline for 3D whole-body estimation. We con-
duct comprehensive experiments to investigate the effects
of the above designs and compare our method, with existing
works on three benchmarks. Results show that OSX outper-
forms the state-of-the-art (SOTA) [29] by 9.5% on AGORA,
7.8% on EHF, and 13.4% on the body-only 3DPW dataset.
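To illustrate the feature-level upsample-crop step mentioned above, here is a differentiable sketch based on bilinear upsampling and grid_sample; the box parameterization, sizes, and scale factor are assumptions, and the keypoint-guided deformable attention that follows it is not shown.

```python
import torch
import torch.nn.functional as F

def upsample_crop(feature_map, box, out_size=64, scale=2):
    """Differentiable upsample-crop sketch: enlarge the encoder feature map, then crop the
    (normalized) hand/face box with grid_sample so gradients flow end-to-end.
    feature_map: (B, C, H, W); box: (B, 4) as normalized (x1, y1, x2, y2)."""
    up = F.interpolate(feature_map, scale_factor=scale, mode="bilinear", align_corners=False)
    xs = torch.linspace(0, 1, out_size, device=up.device)
    gy, gx = torch.meshgrid(xs, xs, indexing="ij")               # unit sampling grid
    x1, y1, x2, y2 = box.unbind(-1)
    # Map the unit grid into each box, then rescale to [-1, 1] for grid_sample
    grid_x = (x1[:, None, None] + gx[None] * (x2 - x1)[:, None, None]) * 2 - 1
    grid_y = (y1[:, None, None] + gy[None] * (y2 - y1)[:, None, None]) * 2 - 1
    grid = torch.stack([grid_x, grid_y], dim=-1)                 # (B, out, out, 2)
    return F.grid_sample(up, grid, align_corners=False)          # (B, C, out, out)

# Usage with stand-in tensors: crop a hand region from an encoder feature map
feats = torch.randn(2, 256, 32, 32)
boxes = torch.tensor([[0.1, 0.2, 0.4, 0.5], [0.5, 0.5, 0.9, 0.9]])
hand_feats = upsample_crop(feats, boxes)
```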
In addition, existing popular benchmarks, as illustrated
in the first row of Figure 2, are either indoor single-person
scenes with limited images ( e.g., EHF [37]) or outdoor syn-
thetic scenes (e.g., AGORA [35]), where the people are of-
ten too far from the camera and the hands and faces are fre-
quently obscured. In fact, human pose estimation and mesh
recovery is a fundamental task that benefits many down-
stream applications, such as sign language recognition, ges-ture generation, and human-computer interaction. Many
scenarios, such as talk shows and online classes, are of vital
importance to our daily life yet under-explored. In such sce-
narios, the upper body is a major focus, whereas the hand
and face are essential for analysis. To address this issue, we
build a large-scale upper-body dataset with fifteen human-
centric real-life scenes, as shown in Figure 2(f) to (t). This
dataset contains many unseen poses, diverse appearances,
heavy truncation, interaction, and abrupt shot changes,
which are quite different from previous datasets. Accord-
ingly, we design a systematical annotation pipeline and pro-
vide precise 2D whole-body keypoint and 3D whole-body
mesh annotations. With this dataset, we perform a compre-
hensive benchmarking of existing whole-body estimators.
Our contributions can be summarized as follows.
• We propose a one-stage pipeline, OSX , for 3D whole-
body mesh recovery, which can regress the SMPL-X
parameters in a simple yet effective manner.
• Despite the conceptual simplicity of our one-stage
framework, it achieves the new state of the art on three
popular benchmarks.
• We build a large-scale upper-body dataset, UBody , to
bridge the gap between the basic task and downstream
applications and provide precise annotations, with
which we conduct benchmarking of existing methods.
We hope UBody can inspire new research topics.
|
Liu_DegAE_A_New_Pretraining_Paradigm_for_Low-Level_Vision_CVPR_2023
|
Abstract
Self-supervised pretraining has achieved remarkable
success in high-level vision, but its application in low-level
vision remains ambiguous and not well-established. What
is the primitive intention of pretraining? What is the core
problem of pretraining in low-level vision? In this paper,
we aim to answer these essential questions and establish
a new pretraining scheme for low-level vision. Specifically,
we examine previous pretraining methods in both high-level
and low-level vision, and categorize current low-level vision
tasks into two groups based on the difficulty of data acqui-
sition: low-cost and high-cost tasks. Existing literature has
mainly focused on pretraining for low-cost tasks, where the
observed performance improvement is often limited. How-
ever, we argue that pretraining is more significant for high-
cost tasks, where data acquisition is more challenging. To
learn a general low-level vision representation that can im-
prove the performance of various tasks, we propose a new
pretraining paradigm called degradation autoencoder (De-
gAE). DegAE follows the philosophy of designing pretext
task for self-supervised pretraining and is elaborately tai-
lored to low-level vision. With DegAE pretraining, SwinIR
achieves a 6.88dB performance gain on image dehaze task,
while Uformer obtains 3.22dB and 0.54dB improvement on
dehaze and derain tasks, respectively.
|
1. Introduction
With the phenomenal success of self-supervised pre-
training in natural language processing (NLP), a large num-
ber of attempts have also been proposed in the field of com-
puter vision [20,21,66,67]. The idea behind self-supervised
pretraining is to learn a general visual representation by de-
vising an appropriate pretext task that does not rely on any
manual annotation. Owing to large-scale pretraining, mod-
els with a voracious appetite for data can alleviate the over-
fitting problem and achieve further improvement.
Referring to the philosophy of masked language
modeling (MLM) in NLP [27,51], masked image modeling
(MIM) [20,67] has been proposed and proven to be extraor-
dinarily effective in high-level vision tasks, e.g., image clas-
sification, object detection, and image segmentation. How-
ever, the notion of low-level vision pretraining is not yet
well-established, due to the distinctions between high-level
and low-level vision tasks. Specifically, the representative
high-level vision tasks take fixed-size images as inputs and
predict manually annotated labels as targets [15, 23], while
most low-level vision methods accept low-quality (LQ) im-
ages as inputs and produce high-quality (HQ) images as
targets [31, 78]. More importantly, the annotation man-
ner in low-level vision is quite different. To obtain LQ-
HQ pairs, a wide range of tasks choose to synthesize in-
put LQ images from collected HQ images, such as classi-
cal super-resolution [11] and Gaussian denoise [77]. Based
on the difficulty of paired-data acquisition, we can roughly
categorize low-level vision tasks into two groups: 1) low-
cost task : tasks with low-cost data acquisition ( e.g., super-
resolution), and 2) high-cost task : tasks with high-cost data
acquisition ( e.g., dehaze). This analysis is absent in exist-
ing low-level vision literature [4, 7, 34]. These works only con-
sider low-cost tasks and simply adopt a straightforward pre-
training strategy that has the same objectives as the down-
stream tasks. Such a pretraining paradigm lacks generality
and only brings marginal improvement. In this paper, we
claim that pretraining could potentially be more effective
for high-cost tasks and that a new pretraining paradigm tai-
lored to low-level vision would be highly beneficial.
To this end, we devise a novel pretraining paradigm for
low-level vision. Since the goal of low-level vision is to
process LQ images with various degradations, we propose
a degradation autoencoder (DegAE) to achieve content-
degradation disentanglement and generation. DegAE ac-
cepts an input image with degradation D1and a reference
image with degradation D2. It attempts to transfer the
degradation D2of the reference image to the input image,
obtaining an output image with input image content, but
with degradation D2, as described in Fig. 1. Through such
a learning paradigm, the model is expected to learn both
Figure 1. Example results of DegAE pretraining. For instance,
given an input noise image and a reference blur image, DegAE
attempts to transfer the blur degradation to the input image. More
visual examples are illustrated in the supplementary file.
natural image representations and degradation information,
which are the key components in low-level vision. Our
approach follows the philosophy of designing a pretext task
for self-supervised pretraining [20, 27]. Firstly, the pretext
task should not depend on the downstream tasks, in order to
achieve the generality and transferability of the pretrained
representations. Secondly, the pretext task should be care-
fully designed to exploit internal structures of data.
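A minimal sketch of the DegAE pretext task, with tiny placeholder layers; how the training target (the input's content rendered under the reference degradation D2) is synthesized is part of the pretext-task design and is not reproduced here.

```python
import torch
import torch.nn as nn

class DegAE(nn.Module):
    """Sketch of the DegAE idea: a content encoder for the input image, a degradation encoder
    for the reference image, and a decoder that recombines them. The layers below are
    placeholders, not the paper's architecture."""
    def __init__(self, ch=32):
        super().__init__()
        self.content_enc = nn.Conv2d(3, ch, 3, padding=1)                 # content features of the input
        self.degrad_enc = nn.Sequential(nn.Conv2d(3, ch, 3, padding=1),
                                        nn.AdaptiveAvgPool2d(1))          # global degradation embedding
        self.decoder = nn.Conv2d(ch, 3, 3, padding=1)

    def forward(self, x_input, x_ref):
        content = self.content_enc(x_input)          # content of the input (carrying degradation D1)
        degrad = self.degrad_enc(x_ref)              # degradation embedding of the reference (D2)
        return self.decoder(content + degrad)        # re-render the content under degradation D2

# Pretext objective with stand-in tensors; `target` stands for the D2-degraded input content
model = DegAE()
x_in, x_ref = torch.randn(2, 3, 64, 64), torch.randn(2, 3, 64, 64)
target = torch.randn(2, 3, 64, 64)
loss = nn.functional.l1_loss(model(x_in, x_ref), target)
```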
To validate the effectiveness of DegAE pretraining, we
choose three representative backbone models (SwinIR [38],
Uformer [64] and Restormer [71]) to conduct experiments.
The results suggest that DegAE pretraining can signif-
icantly improve the model performance. For example,
SwinIR yields a 6.88dB gain on image dehaze task (SOTS)
and a 1.27dB gain on image derain task (Rain100L).
Uformer obtains 3.22dB and 0.54dB improvement on im-
age dehaze and derain task (Test100). Restormer achieves
0.43dB performance improvement on image motion deblur
task (GoPro). As expected, we also observe
incremental improvement on low-cost tasks – SR and
denoise tasks. We believe our efforts can help to bridge
the gap between high-level and low-level vision tasks and
improve the performance of various low-level vision tasks.
|
Liu_Humans_As_Light_Bulbs_3D_Human_Reconstruction_From_Thermal_Reflection_CVPR_2023
|
Abstract
The relatively hot temperature of the human body causes
people to turn into long-wave infrared light sources. Since
this emitted light has a larger wavelength than visible light,
many surfaces in typical scenes act as infrared mirrors with
strong specular reflections. We exploit the thermal reflec-
tions of a person onto objects in order to locate their posi-
tion and reconstruct their pose, even if they are not visible
to a normal camera. We propose an analysis-by-synthesis
framework that jointly models the objects, people, and their
thermal reflections, which combines generative models with
differentiable rendering of reflections. Quantitative and
qualitative experiments show our approach works in highly
challenging cases, such as with curved mirrors or when the
person is completely unseen by a normal camera.
|
1. Introduction
One of the major goals of the computer vision commu-
nity is to locate people and reconstruct their poses in every-
day environments. What makes thermal cameras particu-
larly interesting for this task is the fact that humans are often
the hottest objects in indoor environments, thus becoming
infrared light sources. Humans have a relatively stable body
temperature of 37 degrees Celsius, which, according to the
Stefan-Boltzmann law, turns people into a light source with
constant brightness under long-wave infrared (LWIR). This
makes LWIR images a robust source of signals of human
activities under many different light and camera conditions.
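As a rough back-of-the-envelope check of this point, the Stefan-Boltzmann law gives the radiated power per unit area as j = εσT⁴. The snippet below evaluates it at body temperature; the skin emissivity value is an assumption (commonly quoted figures are around 0.98).

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
T_BODY = 310.15         # ~37 degrees Celsius in kelvin
EMISSIVITY = 0.98       # assumed emissivity of human skin

radiated_power = EMISSIVITY * SIGMA * T_BODY ** 4  # W per m^2 of skin
print(f"Emitted thermal power: {radiated_power:.0f} W/m^2")  # roughly 5e2 W/m^2
```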
Since infrared light on the LWIR spectrum has a wave-
length that is much longer than visible light (8 µm–14 µm vs.
0.38 µm–0.7 µm), the objects in typical scenes look qualita-
tively very different from human vision. Many surfaces of
objects in our daily life – such as a ceramic bowl, a stainless
steel fridge, or a polished wooden table top – have stronger
specular reflections than in the visible light spectrum [ 7,58].
Figure 1 shows the reflection of a person on the surface of
salad bowls, which is barely visible to the naked eye, if at
all, but clearly salient in the LWIR spectrum.
In cluttered environments, a visible light camera may not
always be able to capture the person, such as due to a limited
field of view or occlusions. In such scenes, the ideal scene
for locating and reconstructing a person would be an envi-
ronment full of mirrors. This is what the world looks like
under the LWIR spectrum. Infrared mirrors are abundant in
the thermal modality, and reflections reveal significant non-
line-of-sight information about the surrounding world.
In this paper, we introduce a method that uses the image
of a thermal reflection in order to reconstruct the position
and pose of a person in a scene. We develop an analysis-
by-synthesis framework to model objects, people, and their
thermal reflections in order to reconstruct people and ob-
jects. Our approach combines generative models with dif-
ferentiable rendering to infer the possible 3D scenes that are
compatible with the observations. Given a thermal image,
our approach optimizes for the latent variables of generative
models such that light emitting from the person will reflect
off the object and arrive at the thermal camera plane.
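A minimal sketch of this optimization loop is given below, assuming a differentiable thermal renderer and a latent body model are available; the names render_thermal_reflection, body_model, and latent_dim are placeholders, not the paper's actual API.

```python
import torch

def fit_latents(observed_thermal, body_model, object_mesh,
                render_thermal_reflection, num_steps=500, lr=1e-2):
    """Optimize latent pose/shape so the rendered reflection matches the observation.

    render_thermal_reflection is assumed to be a differentiable renderer that
    traces LWIR light from the body, off the (specular) object surface, to the camera.
    """
    latent = torch.zeros(body_model.latent_dim, requires_grad=True)
    optimizer = torch.optim.Adam([latent], lr=lr)

    for _ in range(num_steps):
        person = body_model(latent)                                 # 3D person hypothesis
        rendered = render_thermal_reflection(person, object_mesh)   # synthesized thermal image
        loss = torch.nn.functional.mse_loss(rendered, observed_thermal)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return latent.detach()
```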
Our approach works in highly challenging cases where
the object acts as a curved mirror. Even when a person is
completely unseen by a normal visible light camera, our ap-
proach is able to localize and reconstruct their 3D pose from
just their thermal reflection. Traditionally, the increased
specularity of surfaces has posed a challenge to thermog-
raphy, making it extremely difficult to measure the surface
temperature of a thermally specular surface, which brings
out a line of active research aiming to remove the specular
reflection for more accurate surface temperature measure-
ment [ 4,5,40,80]. We instead exploit these “difficulties”
of LWIR to tackle the problem of 3D human reconstruction
from a single view of thermal reflection image.
The primary contribution of the paper is a method to use
the thermal reflection of the human body on everyday ob-
jects to infer the person's location in a scene and their 3D structure.
The rest of the paper will analyze this approach in detail.
Section 2 provides a brief overview of related work for 3D
reconstruction and differentiable rendering. Section 3 for-
mulates an integrated generative model of humans and ob-
jects in a scene, then discusses how to perform differen-
tiable rendering of reflection, which we are able to invert to
reconstruct the 3D scene. Section 4 analyzes the capabilities
of this approach in the real world. We believe thermal cam-
eras are powerful tools to study human activities in daily en-
vironments, extending computer vision systems’ ability to
function more robustly even under extreme light conditions.
|
Kundu_IS-GGT_Iterative_Scene_Graph_Generation_With_Generative_Transformers_CVPR_2023
|
Abstract
Scene graphs provide a rich, structured representation
of a scene by encoding the entities (objects) and their spa-
tial relationships in a graphical format. This representation
has proven useful in several tasks, such as question answer-
ing, captioning, and even object detection, to name a few.
Current approaches take a generation-by-classification ap-
proach where the scene graph is generated through labeling
of all possible edges between objects in a scene, which adds
computational overhead to the approach. This work in-
troduces a generative transformer-based approach to gen-
erating scene graphs beyond link prediction. Using two
transformer-based components, we first sample a possible
scene graph structure from detected objects and their visual
features. We then perform predicate classification on the
sampled edges to generate the final scene graph. This ap-
proach allows us to efficiently generate scene graphs from
images with minimal inference overhead. Extensive exper-
iments on the Visual Genome dataset demonstrate the effi-
ciency of the proposed approach. Without bells and whis-
tles, we obtain, on average, 20.7% mean recall (mR@100)
across different settings for scene graph generation (SGG),
outperforming state-of-the-art SGG approaches while offer-
ing competitive performance to unbiased SGG approaches.
|
1. Introduction
Graph-based visual representations are becoming in-
creasingly popular due to their ability to encode visual, se-
mantic, and even temporal relationships in a compact rep-
resentation that has several downstream tasks such as ob-
ject tracking [4], scene understanding [17] and even com-
plex visual commonsense reasoning [2, 3, 22]. Graphs can
help navigate clutter and express complex semantic struc-
tures from visual inputs to mitigate the impact of noise,
clutter, and (appearance/scene) variability, which is essen-
tial in scene understanding. Scene graphs, defined as di-
rected graphs that model the visual-semantic relationships among entities (objects) in a given scene, have proven to be
very useful in downstream tasks such as visual question-
answering [14, 34], captioning [17], and even embodied
tasks such as navigation [27], to name a few.
There has been a growing body of work [7, 10, 29, 33,
36, 38, 41] that has focused on the problem of scene graph
generation (SGG), that aims to generate scene graph from
a given input observation. However, such approaches have
tackled the problem by beginning with a fully connected
graph, where all entities interact with each other before
pruning it down to a more compact graph by predicting edge
relationships, or the lack of one, between each pair of local-
ized entities. This approach, while effective, has several
limitations. First, by modeling the interactions between en-
tities with a dense topology, the underlying semantic struc-
ture is ignored during relational reasoning, which can lead
to poor predicate (relationship) classification. Second, by
constructing pairwise relationships between all entities in a
scene, there is tremendous overhead on the predicate classi-
fication modules since the number of pairwise comparisons
can grow non-linearly with the increase in the number of
detected concepts. Combined, these two issues aggravate
the existing long-tail distribution problem in scene graph
generation. Recent progress in unbiasing [21, 31–33] has
attempted to address this issue by tackling the long-tail dis-
tribution problem. However, they depend on the quality of
the underlying graph generation approaches, which suffer
from the above limitations.
In this work, we aim to overcome these limitations
using a two-stage, generative approach called IS-GGT,
a transformer-based iterative scene graph generation ap-
proach. An overview of the approach is illustrated in Fig-
ure 1. Contrary to current approaches to SGG, we leverage
advances in generative graph models [5, 23] to first sam-
ple the underlying interaction graph between the detected
entities before reasoning over this sampled semantic struc-
ture for scene graph generation. By decoupling the ideas
of graph generation and relationship modeling, we can con-
strain the relationship classification process to consider only
those edges (pairs of entities) that have a higher probability
Figure 1. Our goal is to move towards a generative model for scene graph generation using a two-stage approach where we first sample the
underlying semantic structure between entities before predicate classification. This is different from the conventional approach of modeling
pairwise relationships among all detected entities and helps constrain the reasoning to the underlying semantic structure.
of interaction (both semantic and visual) and hence reduce
the computational overhead during inference. Additionally,
the first step of generative graph sampling (Section 3.2) al-
lows us to navigate clutter by rejecting detected entities that
do not add to the semantic structure of the scene by iter-
atively constructing the underlying entity interaction graph
conditioned on the input image. A relation prediction model
(Section 3.3) reasons over this constrained edge list to clas-
sify the relationships among interacting entities. Hence,
the relational reasoning mechanism only considers the (pre-
dicted) global semantic structure of the scene and makes
more coherent relationship predictions that help tackle the
long-tail distribution problem without additional unbiasing
steps and computational overhead.
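The two-stage inference can be sketched as follows; the interfaces graph_sampler and predicate_classifier are illustrative stand-ins for the paper's transformer components rather than their actual implementation, and the 20% edge-keep ratio simply mirrors the fraction of pairwise edges mentioned in the contributions.

```python
def generate_scene_graph(image_feats, detections, graph_sampler, predicate_classifier,
                         edge_keep_ratio=0.2):
    """Two-stage scene graph generation sketch.

    detections: list of (box, label, visual_feature) tuples for detected entities
    graph_sampler: scores candidate entity pairs and returns a sparse edge list
    predicate_classifier: labels the relationship on each sampled edge
    """
    # Stage 1: sample a sparse interaction graph instead of all O(N^2) pairs.
    candidate_edges = graph_sampler(image_feats, detections)          # [(i, j, score), ...]
    candidate_edges.sort(key=lambda e: e[2], reverse=True)
    keep = max(1, int(edge_keep_ratio * len(candidate_edges)))
    edges = [(i, j) for i, j, _ in candidate_edges[:keep]]

    # Stage 2: classify predicates only on the retained edges.
    triplets = []
    for i, j in edges:
        predicate = predicate_classifier(image_feats, detections[i], detections[j])
        triplets.append((detections[i][1], predicate, detections[j][1]))  # (subject, relation, object)
    return triplets
```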
Contributions. The contributions of this paper are
three-fold: (i) we are among the first to tackle the prob-
lem of scene graph generation using a graph generative
approach without constructing expensive, pairwise compar-
isons between all detected entities, (ii) we propose the idea
of iterative interaction graph generation and global, contex-
tualized relational reasoning using a two-stage transformer-
based architecture for effective reasoning over cluttered,
complex semantic structures, and (iii) through extensive
evaluation on Visual Genome [19] we show that the
proposed approach achieves state-of-the-art performance
(without unbiasing) across all three scene graph generation
tasks while considering only 20% of all possible pairwise
edges using an effective graph sampling approach.
|
Liu_FlowGrad_Controlling_the_Output_of_Generative_ODEs_With_Gradients_CVPR_2023
|
Abstract
Generative modeling with ordinary differential equa-
tions (ODEs) has achieved fantastic results on a variety of
applications. Yet, few works have focused on controlling
the generated content of a pre-trained ODE-based genera-
tive model. In this paper, we propose to optimize the out-
put of ODE models according to a guidance function to
achieve controllable generation. We point out that, the gra-
dients can be efficiently back-propagated from the output
to any intermediate time steps on the ODE trajectory, by
decomposing the back-propagation and computing vector-
Jacobian products. To further accelerate the computation
of the back-propagation, we propose to use a non-uniform
discretization to approximate the ODE trajectory, where
we measure how straight the trajectory is and gather the
straight parts into one discretization step. This allows us
to save ∼90% of the back-propagation time with negli-
gible error. Our framework, named FlowGrad, outperforms
the state-of-the-art baselines on text-guided image manip-
ulation. Moreover, FlowGrad enables us to find global se-
mantic directions in frozen ODE-based generative models
that can be used to manipulate new images without extra
optimization.
|
1. Introduction
Controllable generation is very important for image edit-
ing [2, 11, 43], text-guided image manipulation [19, 30, 33],
etc. Traditionally, we use GANs and optimize the latent em-
bedding with the desirable objective functions [2, 7, 11, 29,
31, 38, 49]. But the disadvantage is that it is difficult to
embed the image into the GAN space faithfully and the per-
formance is limited by the pre-trained GANs, which suffer
from training instability and mode collapse.
Recently, diffusion models (or stochastic differential
equation (SDE)-based generative models) have become popu-
lar [9, 12, 30, 39, 40, 42, 45], and there have been a number of
works on controlled generation based on diffusion models,
such as [9, 28]. But due to the diffusion noise, it is hard to
accurately control the output, especially when it comes to optimizing a complex loss function such as CLIP [35]. To
achieve controlled generation, existing methods either re-
quire training a noised version of the guidance [9, 30, 40],
or fine-tuning the whole diffusion model [13, 19].
In contrast, ordinary differential equation (ODE)-based
generative models represent a simpler alternative than dif-
fusion without involving the diffusion noise and the Ito
calculus machinery. Recently, it has been shown that 1)
ODEs can be trained directly without resorting to SDEs,
and 2) ODE can perform comparable or even better than
SDE [16, 21, 36, 44].
Due to the deterministic nature of ODEs, they form an
ideal model for controlled generation, as they enjoy both
the rich latent space as SDEs and the explicit optimization
framework as GANs. The goal of this work is to fully ex-
plore its potential in terms of controlled generation, with un-
conditioned pre-trained ODEs. Technically, 1) We present a
simple way to control the output of ODE-based deep gener-
ative models with gradients; 2) We present a novel strategy
to speed up the gradient computation by exploring the straight-
ness of ODE trajectories. By measuring the straightness at
each time step during the simulation with Euler discretiza-
tion, we can approximate the ODE trajectory with a few-
step non-uniform discretization, and consequently reduce a
great amount of time in back-propagation.
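The straightness-based discretization can be sketched as below; this is not FlowGrad's implementation, and the cosine-similarity threshold used to decide where the trajectory bends is an assumption. The idea is that consecutive Euler steps with nearly parallel velocities can be merged into a single step, so back-propagation only needs to traverse the retained segments.

```python
import torch

@torch.no_grad()
def straightness_aware_steps(velocity_fn, x0, num_steps=100, cos_thresh=0.999):
    """Simulate dx/dt = v(x, t) with Euler steps and group nearly-straight segments.

    velocity_fn(x, t) -> velocity. Returns a list of (t_start, t_end, x_start)
    segments, each of which can be treated as one discretization step when
    back-propagating gradients through the trajectory.
    """
    dt = 1.0 / num_steps
    x, t = x0, 0.0
    segments, seg_start_t, seg_start_x = [], 0.0, x0
    prev_v = None

    for _ in range(num_steps):
        v = velocity_fn(x, t)
        if prev_v is not None:
            cos = torch.nn.functional.cosine_similarity(
                v.flatten(), prev_v.flatten(), dim=0)
            if cos < cos_thresh:  # trajectory bends: close the current straight segment
                segments.append((seg_start_t, t, seg_start_x))
                seg_start_t, seg_start_x = t, x
        x = x + dt * v            # Euler update
        t += dt
        prev_v = v

    segments.append((seg_start_t, 1.0, seg_start_x))
    return segments
```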
Our fast gradient computation scheme, named Flow-
Grad, allows us to efficiently control the generated con-
tents of ODEs with any differentiable loss functions. In
particular, we test FlowGrad on a challenging objective
function, the CLIP loss, to manipulate user-provided im-
ages with text prompts. Moreover, by optimizing a set of
training images together, FlowGrad can find semantically
meaningful global directions in pre-trained ODE models,
which allow manipulating new images for free. Equipped
with advanced ODE-based generative models, FlowGrad
outperforms state-of-the-art CLIP-guided diffusion models
and GANs.
|
Liu_Delving_Into_Discrete_Normalizing_Flows_on_SO3_Manifold_for_Probabilistic_CVPR_2023
|
Abstract
Normalizing flows (NFs) provide a powerful tool to con-
struct an expressive distribution by a sequence of tractable
transformations of a base distribution and form a proba-
bilistic model of underlying data. Rotation, as an important
quantity in computer vision, graphics, and robotics, can ex-
hibit many ambiguities when occlusion and symmetry oc-
cur and thus demands such probabilistic models. Though
much progress has been made for NFs in Euclidean space,
there are no effective normalizing flows without discontinu-
ity or many-to-one mapping tailored for SO(3) manifold.
Given the unique non-Euclidean properties of the rotation
manifold, adapting the existing NFs to SO(3) manifold is
non-trivial. In this paper, we propose a novel normaliz-
ing flow on SO(3) by combining a Mobius transformation-
based coupling layer and a quaternion affine transforma-
tion. With our proposed rotation normalizing flows, one
can not only effectively express arbitrary distributions on
SO(3) , but also conditionally build the target distribution
given input observations. Extensive experiments show that
our rotation normalizing flows significantly outperform the
baselines on both unconditional and conditional tasks.
|
1. Introduction
Endowing a neural network with the ability to express
uncertainty along with the prediction is of crucial influence
to safety and interpretability-critical systems and provides
valuable information for downstream tasks [4, 19, 32]. As
a widely used technique in computer vision and robotics,
rotation regression can also benefit from such uncertainty-
aware predictions and enable many applications [5, 14, 31].
To this end, recent years have witnessed much effort in
modeling the uncertainty of rotation via probabilistic mod-
eling of the SO(3) space, including von Mises distribu-
tion for Euler angles [27], Bingham distribution for quater-
nions [6, 13], matrix Fisher distribution for rotation ma-
trices [24], etc. Those distributions are all single-modal,
which fall short on modeling objects with continuous sym-
metry, which are ubiquitous in our daily life. Taking cupas
an example, it exhibits rotational symmetry for which mod-
eling with the unimodal or the mixture of distributions is
clearly insufficient. How to model an arbitrary distribution
onSO(3) manifold is still a challenging open problem.
Normalizing flows [28], which maps samples from a
simple base distribution to the target distributions via in-
vertible transformations, provides a flexible way to express
complex distributions and has been widely used in express-
ing arbitrary distributions in Euclidean space [1, 2, 7, 8, 16,
17]. However, developing normalizing flows on SO(3)
manifold is still highly under-explored.
Some works rely on normalizing flows in Euclidean
space and adapt them to handle rotations. ReLie [9] pro-
poses normalizing flows for general Lie group via Lie alge-
bra in Euclidean space. However, it suffers from discontin-
uous rotation representations [37] and leads to inferior per-
formance. ProHMR [18] considers rotations as 6D vectors
in Euclidean space and leverages Glow [17] to model distri-
butions, where a many-to-one Gram-Schmidt projection is
needed to close the gap between the two spaces. Although
composed of bijective transformations in Euclidean space,
the many-to-one mapping from Euclidean space to SO(3)
breaks the one-to-one requirement of normalizing flows [28].
Other works propose general normalizing flows for non-
Euclidean spaces. Mathieu et al. [23], Lou et al. [21] and
Falorsi et al. [10] propose continuous normalizing flows
for general Riemannian manifold, without considering any
property of SO(3) space, which leads to unsatisfactory per-
formance for probabilistic rotation modeling. Rezende et al.
[29] introduce normalizing flows on tori and spheres. Note
that despite unit quaternions lying on S3space, [29] does
not exhibit the antipodal symmetry property of quaternions
and thus is not suitable for modeling rotations in SO(3) .
In this work, we introduce novel discrete normalizing
flows for rotations on the SO(3) manifold. The core build-
ing block of our discrete rotation normalizing flow consists
of a Mobius coupling layer with rotation matrix representa-
tion and an affine transformation with quaternion represen-
tation, linked by conversions between rotations and quater-
nions. In the Mobius coupling layer, one column of the ro-
tation matrix acts as the conditioner, remaining unchanged.
Another column serves as the transformer, undergoing Mo-
bius transformation conditioned on the conditioner, while
the remaining column is determined by the cross-product.
By combining multiple Mobius coupling layers, we en-
hance the capacity of our vanilla design for rotation nor-
malizing flows.
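The coupling pattern on a rotation matrix can be illustrated structurally as follows. This is not the Mobius transformation itself: sphere_map is a placeholder for any conditioned, invertible map on the sphere, and the orthogonal re-projection step stands in for the constraint that keeps the transformed column compatible with the conditioner.

```python
import torch

def rotation_coupling(R, sphere_map):
    """Coupling-layer pattern on a batch of rotation matrices R of shape (B, 3, 3).

    Column 0 is the conditioner (unchanged), column 1 is the transformer
    (mapped by `sphere_map`, conditioned on column 0), and column 2 is
    recovered by the cross product so the output stays a rotation matrix.
    """
    c1 = R[:, :, 0]                               # conditioner: left unchanged
    c2 = sphere_map(R[:, :, 1], condition=c1)     # transformer: conditioned map on S^2

    # Re-project c2 onto the plane orthogonal to c1 and renormalize
    # (a stand-in for the constraint enforced by the Mobius transformation).
    c2 = c2 - (c2 * c1).sum(-1, keepdim=True) * c1
    c2 = c2 / c2.norm(dim=-1, keepdim=True)

    c3 = torch.cross(c1, c2, dim=-1)              # third column from the cross product
    return torch.stack([c1, c2, c3], dim=-1)
```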
To further increase expressivity, we propose an affine
transformation in quaternion space. This quaternion affine
transformation complements the Mobius coupling layer,
functioning as both a global rotation and a means for con-
densing or dilating the local likelihood. Despite quaternions
being a double coverage of the SO(3) manifold, this trans-
formation remains bijective and diffeomorphic to SO(3) .
We conduct extensive experiments to validate the ex-
pressivity and stability of our proposed rotation normaliz-
ing flows. The results show that our rotation normalizing
flows are able to either effectively fit the target distributions
onSO(3) with distinct shapes, or regress the target distri-
bution given input image conditions. Our method achieves
superior performance on both tasks over all the baselines.
|
Li_Less_Is_More_Reducing_Task_and_Model_Complexity_for_3D_CVPR_2023
|
Abstract
Whilst the availability of 3D LiDAR point cloud data has
significantly grown in recent years, annotation remains ex-
pensive and time-consuming, leading to a demand for semi-
supervised semantic segmentation methods with application
domains such as autonomous driving. Existing work very
often employs relatively large segmentation backbone net-
works to improve segmentation accuracy, at the expense of
computational costs. In addition, many use uniform sam-
pling to reduce the ground truth data required for learning,
often resulting in sub-optimal performance. To ad-
dress these issues, we propose a new pipeline that employs a
smaller architecture, requiring fewer ground-truth annota-
tions to achieve superior segmentation accuracy compared
to contemporary approaches. This is facilitated via a novel
Sparse Depthwise Separable Convolution module that signif-
icantly reduces the network parameter count while retaining
overall task performance. To effectively sub-sample our
training data, we propose a new Spatio-Temporal Redun-
dant Frame Downsampling (ST-RFD) method that leverages
knowledge of sensor motion within the environment to ex-
tract a more diverse subset of training data frame samples.
To leverage the use of limited annotated data samples, we
further propose a soft pseudo-label method informed by Li-
DAR reflectivity. Our method outperforms contemporary
semi-supervised work in terms of mIoU, using less labeled
data, on the SemanticKITTI (59.5@5%) and ScribbleKITTI
(58.1@5%) benchmark datasets, based on a 2.3× reduction
in model parameters and 641× fewer multiply-add oper-
ations whilst also demonstrating significant performance
improvement on limited training data (i.e., Less is More).
|
1. Introduction
3D semantic segmentation of LiDAR point clouds has played
a key role in scene understanding, facilitating applications
such as autonomous driving [6, 23, 26, 28, 46, 60, 63] and
robotics [3, 38, 51, 52]. However, many contemporary meth-
ods require relatively large backbone architectures with mil-
lions of trainable parameters requiring many hundred giga-
bytes of annotated data for training at a significant compu-
tational cost. Considering the time-consuming and costly
nature of 3D LiDAR annotation, such methods have become
less feasible for practical deployment.
Figure 1. mIoU performance (%) against parameters and multiply-
add operations on SemanticKITTI (fully annotated) and Scrib-
bleKITTI (weakly annotated) under the 5% sampling protocol.
Existing supervised 3D semantic segmentation meth-
ods [13, 31, 38, 44, 51, 52, 56, 58, 63] primarily focus on
designing network architectures for densely annotated data.
To reduce the need for large-scale data annotation, and in-
spired by similar work in 2D [11, 47, 49], recent 3D work
proposes efficient ways to learn from weak supervision [46].
However, such methods still suffer from high training costs
and inferior on-task performance. To reduce computational
costs, a 2D projection-based point cloud representation is
often considered [3, 14, 31, 38, 51, 52, 56, 61], but again at
the expense of significantly reduced on-task performance.
As such, we observe a gap in the research literature for the
design of semi or weakly supervised methodologies that
employ a smaller-scale architectural backbone, hence facili-
tating improved training efficiency whilst also reducing their
associated data annotation requirements.
In this paper, we propose a semi-supervised methodology
for 3D LiDAR point cloud semantic segmentation. Facili-
tated by three novel design aspects, our Less is More (LiM)
based methodologies require less training data and less train-
ing computation whilst offering (more) improved accuracy
over contemporary state-of-the-art approaches (see Fig. 1).
Firstly, from an architectural perspective, we propose a
novel Sparse Depthwise Separable Convolution (SDSC)
module, which substitutes traditional sparse 3D convolu-
tion into existing 3D semantic segmentation architectures,
resulting in a significant reduction in trainable parameters
and numerical computation whilst maintaining on-task per-
formance (see Fig. 1). Depthwise Separable Convolution
has shown to be very effective within image classification
tasks [12]. Here, we tailor a sparse variant of 3D Depthwise
Separable Convolution for 3D sparse data by first applying
a single submanifold sparse convolutional filter [20, 21] to
each input channel with a subsequent pointwise convolution
to create a linear combination of the sparse depthwise convo-
lution outputs. This work is the first to attempt to introduce
depthwise convolution into the 3D point cloud segmentation
field as a conduit to reduce model size. Our SDSC module
facilitates a 50% reduction in trainable network parameters
without any loss in segmentation performance.
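As a dense analogue of the SDSC idea (the actual module uses submanifold sparse convolutions, which are not reproduced here), the depthwise-then-pointwise factorization can be sketched in plain PyTorch:

```python
import torch.nn as nn

class DepthwiseSeparableConv3d(nn.Module):
    """Dense 3D depthwise-separable convolution: per-channel spatial filtering
    followed by a 1x1x1 pointwise mixing convolution. A sparse version would
    replace both convolutions with submanifold sparse equivalents."""

    def __init__(self, in_channels, out_channels, kernel_size=3):
        super().__init__()
        self.depthwise = nn.Conv3d(
            in_channels, in_channels, kernel_size,
            padding=kernel_size // 2, groups=in_channels, bias=False)
        self.pointwise = nn.Conv3d(in_channels, out_channels, 1, bias=False)

    def forward(self, x):              # x: (B, C, D, H, W) voxel features
        return self.pointwise(self.depthwise(x))
```

For a 3x3x3 kernel this replaces roughly C_in * C_out * 27 weights with C_in * 27 + C_in * C_out, which is where the parameter reduction comes from.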
Secondly, from a training data perspective, we propose
a novel Spatio-Temporal Redundant Frame Downsam-
pling (ST-RFD) strategy that more effectively sub-samples
a set of diverse frames from a continuously captured LiDAR
sequence in order to maximize diversity within a minimal
training set size. We observe that continuously captured
LiDAR sequences often contain significant temporal redun-
dancy, similar to that found in video [2], whereby temporally
adjacent frames provide poor data variation. On this basis,
we propose to compute the temporal correlation between ad-
jacent frame pairs, and use this to select the most informative
sub-set of LiDAR frames from a given sequence. Unlike pas-
sive sampling ( e.g., uniform or random sampling), our active
sampling approach samples frames from each sequence such
that redundancy is minimized and hence training set diver-
sity is maximal. When compared to commonplace passive
random sampling approaches [29,32,46], ST-RFD explicitly
focuses on extracting a diverse set of training frames that
will hence maximize model generalization.
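A simplified sketch of redundancy-aware frame selection is shown below; the cosine-similarity correlation measure and the greedy selection rule are illustrative stand-ins for the ST-RFD criterion rather than the paper's exact procedure.

```python
import numpy as np

def select_diverse_frames(frames, budget, correlation=None):
    """Keep frames that are poorly correlated with their temporal predecessor.

    frames: list of per-frame feature arrays (e.g., range images or descriptors)
    budget: maximum number of frames to keep
    correlation: similarity function; defaults to cosine similarity
    """
    if correlation is None:
        def correlation(a, b):
            a, b = a.ravel(), b.ravel()
            return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

    kept = [0]                                                # always keep the first frame
    # Score each remaining frame by how redundant it is w.r.t. the previous frame.
    scores = [correlation(frames[i - 1], frames[i]) for i in range(1, len(frames))]
    # Keep the frames with the lowest temporal correlation (most informative).
    order = np.argsort(scores)                                # ascending: least correlated first
    for idx in order:
        if len(kept) >= budget:
            break
        kept.append(idx + 1)
    return sorted(kept)
```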
Finally, in order to employ semi-supervised learning, we
propose a soft pseudo-label method informed by the LiDAR
reflectivity response, thus maximizing the use of any an-
notated data samples. Whilst directly using unreliable soft
pseudo-labels generally results in performance deteriora-
tion [5], the voxels corresponding to the unreliable predic-
tions can instead be effectively leveraged as negative samples
of unlikely categories. Therefore, we use cross-entropy to
separate all voxels into two groups, i.e., a reliable and an un-
reliable group with low and high-entropy voxels respectively.
We utilize predictions from the reliable group to derive pos-
itive pseudo-labels, while the remaining voxels from the
unreliable group are pushed into a FIFO category-wise mem-
ory bank of negative samples [4]. To further assist semantic
segmentation of varying materials in the situation where we
have weak/unreliable/no labels, we append the reflectivity
response features onto the point cloud features, which again
improve segmentation results.
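The reliable/unreliable split can be sketched as follows; the entropy threshold, the choice of the least-likely class as the negative, and the memory-bank layout are assumptions for illustration.

```python
import torch
from collections import deque

def split_by_entropy(logits, entropy_thresh, negative_banks, bank_size=4096):
    """logits: (N, C) per-voxel predictions on unlabeled data.

    Returns positive pseudo-labels for reliable (low-entropy) voxels and updates
    a per-class FIFO memory bank of negative samples for the unreliable ones.
    """
    probs = torch.softmax(logits, dim=1)
    entropy = -(probs * torch.log(probs + 1e-12)).sum(dim=1)   # per-voxel entropy
    reliable = entropy < entropy_thresh

    pseudo_labels = probs[reliable].argmax(dim=1)              # positives from reliable group

    # Unreliable voxels become negatives for their *unlikely* categories.
    unreliable_probs = probs[~reliable]
    unlikely = unreliable_probs.argsort(dim=1)[:, 0]           # least-likely class per voxel
    for row, cls in enumerate(unlikely.tolist()):
        bank = negative_banks.setdefault(cls, deque(maxlen=bank_size))
        bank.append(unreliable_probs[row].detach())
    return reliable, pseudo_labels, negative_banks
```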
We evaluate our method on the SemanticKITTI [7] and
ScribbleKITTI [46] validation set. Our method outperforms
contemporary state-of-the-art semi- [29,32] and weakly- [46]
supervised methods and offers more in terms of performance
on limited training data, whilst using less trainable parame-
ters and less numerical operations (Less is More).
Overall, our contributions can be summarized as follows:
•A novel methodology for semi-supervised 3D LiDAR
semantic segmentation that uses significantly less pa-
rameters and offers (more) superior accuracy.
•A novel Sparse Depthwise Separable Convolution
(SDSC) module, to reduce trainable network param-
eters, and to both reduce the likelihood of over-fitting
and facilitate a deeper network architecture.
•A novel Spatio-Temporal Redundant Frame Downsam-
pling (ST-RFD) strategy, to extract a maximally diverse
data subset for training by removing temporal redun-
dancy and hence future annotation requirements.
•A novel soft pseudo-labeling method informed by Li-
DAR reflectivity as a proxy to in-scene object material
properties, facilitating effective use of limited data an-
notation.
|
Lin_Bit-Shrinking_Limiting_Instantaneous_Sharpness_for_Improving_Post-Training_Quantization_CVPR_2023
|
Abstract
Post-training quantization (PTQ) is an effective com-
pression method to reduce the model size and computa-
tional cost. However, quantizing a model into a low-bit
one, e.g., lower than 4, is difficult and often results in non-
negligible performance degradation. To address this, we
investigate the loss landscapes of quantized networks with
various bit-widths. We show that a network with a more
ragged loss surface is more easily trapped into bad local
minima, which mostly appears in low-bit quantization. A
deeper analysis indicates, the ragged surface is caused by
the injection of excessive quantization noise. To this end,
we detach a sharpness term from the loss which reflects the
impact of quantization noise. To smooth the rugged loss
surface, we propose to limit the sharpness term small and
stable during optimization. Instead of directly optimizing
the target bit network, we design a self-adapted shrinking
scheduler for the bit-width in continuous domain from high
bit-width to the target by limiting the increasing sharpness
term within a proper range. It can be viewed as iteratively
adding small “instant” quantization noise and adjusting the
network to eliminate its impact. Extensive experiments includ-
ing classification and detection tasks demonstrate the ef-
fectiveness of the Bit-shrinking strategy in PTQ. On the Vi-
sion Transformer models, our INT8 and INT6 models drop
within 0.5% and 1.5% Top-1 accuracy, respectively. On
the traditional CNN networks, our INT4 quantized models
drop within 1.3% and 3.5% Top-1 accuracy on ResNet18
and MobileNetV2 without fine-tuning, which achieves the
state-of-the-art performance.
|
1. Introduction
In recent years, network compression, such as Knowl-
edge distillation [16], Pruning [9, 17, 38], and Quantiza-
tion [30, 34, 35, 45] are rapidly developing to achieve effi-
cient inference for Deep Neural Networks (DNNs) both at
Figure 1. The loss landscapes of the full-precision and various
bits ResNet-20 on CIFAR-100. We plot the loss landscapes using
the visualization methods in [18]. The landscape of the lower-bit
quantized network is more ragged, and is more easily trapped into
bad local minima.
edge devices and in the clouds. Among them, quantization
is a promising as well as hardware-friendly approach. It re-
duces the computation complexity and memory footprint by
representing weights and activations with low-bit integers.
Traditional approaches often perform a Quantization-
aware Training (QAT) [35–37, 45] process to achieve guar-
anteed accuracy. However, the long time re-training and
the requirements of full training data consume unaccept-
able computation and storage resources, making it imprac-
tical in industry. On the contrary, Post Training Quantiza-
tion (PTQ) [2, 12, 25, 26, 43] is fast and light. PTQ effi-
ciently turns a pre-trained full precision network to quan-
tized one with only a small calibration set. Although more
user-friendly, PTQ suffers performance degradation when
pursuing ultra low compression ratio [28, 39] (e.g. lower
than 4 bits). This is because only the magnitude informa-
tion of weights and activations is utilized by simply apply-
ing rounding-to-nearest in predominant approaches. The ir-
reparable quantization noise will accumulate layer by layer
throughout the network.
To this end, some layer-wise or block-wise
reconstruction-based algorithms (e.g. AdaRound [26],
Bit-Split [28], AdaQuant [12], QDrop [39], BRECQ [19],
UWC [20]) are proposed, which greatly improve the
accuracy when the quantized weights go down to 4-bit. An
analysis [26] on the second order Taylor series expansion
of the task loss indicates reconstruction-based methods
could reduce the error introduced by rounding because
they leverage the interaction between weights. QDrop [39]
proposes to take activation quantization into consideration
to increase the flatness of loss landscape. Nevertheless,
when the compression rate goes higher , e.g., the activations
quantized to 4 bits or the weights quantized to 2 bits, or
compressing more complex models, e.g., the recent prevail-
ing Vision Transformer models [6, 23, 33], there still remains a
non-negligible accuracy gap with the original model [24, 42].
Fig. 1 shows that, as the bit-width goes lower, the loss land-
scapes have more ragged surface. Therefore, in lower bits’
optimization process, the networks are easily trapped into
bad local minima and result in performance degradation.
As mentioned above, optimizing a low-bit model in
PTQ is very challenging. In forward process, full-precision
weights and activations are quantized to a small set of fixed-
point values, which introduces large quantization noise to
the network. The quantization noise causes the loss dis-
torted, i.e. a rugged loss landscape shown in Fig. 1. As a
consequence, the distorted loss makes the optimization un-
stable, misleading the network to poor local minima. In
QAT, some progressive approaches [14, 47] are proposed.
For instance, CTMQ [14] trains multiple precision quan-
tized models from high-bit to low-bit, and use the weight of
trained higher bit model to initialize the next low-bit model.
The progressive process is experimentally verified to be
helpful for the quantized network to reduce the quantiza-
tion noise, resulting in better local minima. In QAT, to fully
optimize the low-bit quantization network, the progressive
approaches preset a complex bit dropping schedule, includ-
ing long precision sequence and tedious training iterations.
With multiplied training cost, traditional progressive meth-
ods are not practical in PTQ.
In this paper, a sharpness term, detaching the quantiza-
tion noise’s impact on loss, is defined to precisely estimate
the degree of the loss distortion. Based on the sharpness
term, a self-adapted progressive quantization scheduler for
PTQ, named Bit-shrinking, is designed to help the low-bit
model find better local minima. Instead of directly quan-tizing the network to the target low bit-width, Bit-shrinking
relaxes bit-width to continuous value and gradually shrink
it to the target bit-width during optimization to limit the
sharpness. Each time the bit-width is shrunk, the “instant”
sharpness introduced is limited within a preset threshold.
Shrinking the bit-width and adjusting the weights are itera-
tively performed until the target bit arrives. Consequently,
with the “instant” sharpness term limited, the loss surface is
smoothed which helps to find good minima. Different from
traditional progressive approaches, which evenly drop the
integer bit-width, Bit-shrinking adopts a better-suited dropping
schedule on the continuous bit-width. It avoids the additional
training cost by not over-optimizing triv-
ial bit-widths.
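A schematic of such a shrinking schedule is sketched below; sharpness_fn and adjust_fn are placeholders for the paper's detached sharpness term and weight-adjustment step, and the step sizes and threshold are illustrative.

```python
def bit_shrinking_schedule(model, calib_data, sharpness_fn, adjust_fn,
                           start_bits=8.0, target_bits=4.0,
                           step=0.25, sharpness_limit=0.05, max_iters=1000):
    """Progressively shrink a continuous bit-width toward the target.

    sharpness_fn(model, bits, data): estimates the sharpness increase caused by
        quantizing at `bits` (a stand-in for the detached sharpness term).
    adjust_fn(model, bits, data): briefly re-optimizes weights at the current bit-width.
    """
    bits = start_bits
    for _ in range(max_iters):
        if bits <= target_bits:
            break
        candidate = max(target_bits, bits - step)
        if sharpness_fn(model, candidate, calib_data) > sharpness_limit:
            # The jump is too sharp: adjust weights at the current bit-width and
            # retry with a smaller shrink.
            adjust_fn(model, bits, calib_data)
            step = max(step / 2.0, 1e-3)
            continue
        bits = candidate
        adjust_fn(model, bits, calib_data)   # absorb the newly injected quantization noise
    return model, bits
```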
This paper proposes a novel and practical Post-Training
quantization framework to improve the accuracy of low-bit
quantized network. The motivation is to reduce the impact
of quantization noise during optimization by a Bit-shrinking
strategy. We show that Bit-shrinking can help the low-bit
network to find better local minima comparing with the
direct quantization approach. Our main contributions are
summarized as follows:
• Based on the observation of the loss landscape in low-
bit quantization, we find that excessive sharpness of
the loss landscape misleads the optimization direction.
To precisely estimate and further limit the impact of
quantization noise during optimization, we detach a
sharpness term from the loss.
• To calibrate the optimization direction, we propose
Bit-shrinking, a self-adapted progressive quantization
scheduler for PTQ, to limit sharpness term small and
stable during optimization. The landscape is smoothed
as the progressive scheduler iteratively adds small “in-
stant” sharpness and adjusts the network. As a re-
sult, Bit-shrinking helps quantized low-bit network
find better local minima comparing with direct quanti-
zation approach, and results in better performance.
• Extensive experiments, including Vision Transformer
and CNN classification models on ImageNet dataset
and Faster RCNN, RetinaNet detection models on
COCO dataset, demonstrate the superiority of the Bit-
shrinking without end-to-end fine-tuning. On the Vi-
sion Transformer models, our INT8 and INT6 mod-
els drop within 0.5% and 1.5% Top-1 accuracy, re-
spectively. Our INT4 quantized model drops within
1.3% and 3.5% Top-1 accuracy on ResNet18 and Mo-
bileNetV2, which achieves the SOTA performance.
|
Liu_Detecting_Backdoors_During_the_Inference_Stage_Based_on_Corruption_Robustness_CVPR_2023
|
Abstract
Deep neural networks are proven to be vulnerable to
backdoor attacks. Detecting the trigger samples during the
inference stage, i.e., the test-time trigger sample detection,
can prevent the backdoor from being triggered. However,
existing detection methods often require the defenders to
have high accessibility to victim models, extra clean data,
or knowledge about the appearance of backdoor triggers,
limiting their practicality.
In this paper, we propose the test-time corruption ro-
bustness consistency evaluation (TeCo)1, a novel test-time
trigger sample detection method that only needs the hard-
label outputs of the victim models without any extra infor-
mation. Our journey begins with the intriguing observa-
tion that the backdoor-infected models have similar per-
formance across different image corruptions for the clean
images, but perform discrepantly for the trigger samples.
Based on this phenomenon, we design TeCo to evaluate test-
time robustness consistency by calculating the deviation of
severity that leads to predictions’ transition across different
corruptions. Extensive experiments demonstrate that com-
pared with state-of-the-art defenses, which even require ei-
ther certain information about the trigger types or acces-
sibility of clean data, TeCo outperforms them on different
backdoor attacks, datasets, and model architectures, enjoy-
ing a higher AUROC by 10% and 5 times the stability.
1https://github.com/CGCL-codes/TeCo
|
1. Introduction
Backdoor attacks have been shown to be a threat to
deep neural networks (DNNs) [14, 26, 32, 38]. A backdoor-
infected DNN will perform normally on clean input data,
but output the adversarially desirable target label when the
input data are tampered with a special pattern ( i.e., the back-
door trigger), which may cause serious safety issues.
A critical dependency of a successful backdoor attack
is that the attacker must provide the samples with back-
door triggers (we call them trigger samples for short here-
after) to the infected models on the inference stage, other-
wise, the backdoor will not be triggered. Thus, one way
to counter the backdoor attacks is to judge whether the test
data contain triggers, i.e., the test-time trigger sample de-
tection (TTSD) defense2[5, 12, 42]. This kind of defense
can work corporately with other backdoor defenses such as
model diagnosis defense [9, 15, 46] or trigger reverse engi-
neering [40, 43], and also provide prior knowledge of the
trigger samples in a comprehensive defense pipeline, which
can help the down-steam defenses to statistically analyze
the backdoor samples and mitigate the backdoor more ef-
fectively.
On the other hand, the TTSD method, especially the
black-box TTSD method can also serve as the last line of
defense when someone adopts models with unknown cred-
ibility and has no authority to get access to the training data
or model parameters, this scenario exists widely in the pre-
2Some papers also call it online backdoor defense [30, 39].
vailing machine-learning-as-a-service (MLaaS) [19, 35].
However, with the development of backdoor attacks,
the TTSD defense is facing great challenges. One of
the major problems is that different types of triggers have
been presented. Unlike the early backdoor attacks whose
triggers are universal [3, 14] for all the images and usu-
ally conspicuous to human observers, recent works intro-
duced sample-specific triggers [32] and even invisible trig-
gers [8, 21, 26, 33, 49], making it harder to apply pattern
statistics or identify outliers in the image space. An-
other main problem is the hardship of accomplishing the
TTSD defense without extra knowledge such as supplemen-
tal data or model accessibility. On the other hand, exist-
ing TTSD methods require certain knowledge and assump-
tions, including that the trigger is of a specific
type [12, 42], that the defenders have white-box access to
victim models or to the predicted soft confidence scores of each
class [5, 12], or that extra clean data is available for statistical analysis [48],
limiting their practicality for real-world applications.
In this paper, we aim to design a TTSD defense free
from these limitations. Specifically, we concentrate on a
more practicable black-box hard-label backdoor setting [15]
where defenders can only get the final decision from the
black-box victim models. In addition, no extra data is acces-
sible and no assumption on trigger appearance is allowed.
This setting assumes the defenders’ ability as weak as pos-
sible and makes TTSD hard to achieve. To the best of our
knowledge, we are the first to focus on the effectiveness of
TTSD in this strict setting, and we believe it is desirable
to develop TTSD methods working on such a scenario be-
cause it is very relevant to the wide deployment of cloud AI
service [4, 11] and embedded AI devices [1].
Since the setting we mentioned above has restricted the
accessibility of victim models and the use of extra data, we
cannot analyze the information in feature space [30, 39] or
train a trigger sample detector [10, 48] like existing works.
Fortunately, we find that the backdoor-infected models will
present clearly different corruption robustness for trigger
samples influenced by different image corruptions, but have
relatively similar robustness throughout different image cor-
ruptions for clean samples, leaving the clue for trigger sam-
ple detection. We call these findings the anomalous cor-
ruption robustness consistency of backdoor-infected mod-
els and describe them at length in Sec. 3. It is not the first
time that image corruptions are discussed in backdoor at-
tacks and defenses [27, 28, 34]. However, previous works
fail to explore the correlations between robustness against
different corruptions, as discussed in Sec. 3.3.
Based on our findings above, we propose test-time
corruption robustness consistency evaluation (TeCo), a
novel test-time trigger sample detection method. At the in-
ference stage of backdoor-infected models, TeCo modifies
the input images by commonly used image corruptions [18]
Table 1. The model's accessibility, the use of clean data, and the
assumptions on backdoor triggers required by various TTSD meth-
ods (SentiNet [5], SCan [39], Beatrix [30], NEO3 [42], STRIP [12],
FreqDetector [48], and TeCo (ours)), compared in terms of black-box
access (logits-based or decision-based), the need for clean data, and
the supported trigger types (universal, sample-specific, invisible).
We detail the most related defenses in Sec. 2.
with growing severity and estimates the robustness against
different types of corruptions from the hard-label outputs of
the models. Then, a deviation measurement method is ap-
plied to calculate how spread out the results of robustness
are. TeCo then makes the final judgment of whether the in-
put images carry triggers based on this metric. Extensive
experiments show that, compared with existing advanced
TTSD methods, TeCo improves AUROC by about 10%, achieves a
14% higher F1-score, and is 5 times more stable
against different types of trigger.
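The consistency test itself can be sketched as follows under the decision-only (hard-label) access assumption; the corruption set and the use of the standard deviation as the deviation measure are illustrative choices.

```python
import statistics

def teco_score(model_predict, image, corruptions, max_severity=5):
    """Compute a corruption-robustness-consistency score for one input image.

    model_predict(img) -> hard label (black-box, decision-only access)
    corruptions: dict name -> corrupt(img, severity) returning a corrupted image
    Returns the deviation of the severity at which the prediction first changes;
    trigger samples are expected to yield larger deviations than clean ones.
    """
    base_label = model_predict(image)
    transition_severities = []
    for name, corrupt in corruptions.items():
        flip_at = max_severity + 1                       # default: prediction never flips
        for severity in range(1, max_severity + 1):
            if model_predict(corrupt(image, severity)) != base_label:
                flip_at = severity
                break
        transition_severities.append(flip_at)
    return statistics.pstdev(transition_severities)      # spread across corruption types
```

An image would then be flagged as a trigger sample when its score exceeds a detection threshold.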
Finally, we take a deep investigation into our observa-
tions by constructing adaptive attacks against TeCo. From
the results of feature space visualization and quantification
of adaptive attacks, we speculate that the anomalous behav-
ior of corruption robustness consistency derives from the
widely-used dual-target training in backdoor attacks and it
is hard to be avoided by existing trigger types. We hope
these findings can shed light on a new perspective of back-
door attacks and defenses for the community. In summary,
we make the following contributions:
• We propose TeCo, a novel test-time trigger sample de-
tection method that only requires the hard-label out-
puts of the victim models and without extra data or
assumptions about trigger types.
• We discover the fact of anomalous corruption robust-
ness consistency, i.e., the backdoor-infected models
have similar performance across different image cor-
ruptions for clean images, but not for the trigger sam-
ples.
• We evaluate TeCo on five datasets, four model archi-
tectures (including CNNs and ViTs), and seven back-
door attacks with diverse trigger types. All experimen-
tal results support that TeCo outperforms state-of-the-
art methods.
• We further analyze our observations by constructing
adaptive attacks against TeCo. Experiments show that
the widely-used dual-target training in backdoor at-
tacks leads to anomalous corruption robustness consis-
tency, and that this is hard to avoid with existing backdoor
triggers.
3NEO assumes the backdoor trigger is localized [14] and thus will be in-
valid on distributed or global triggers [3, 8, 32, 48], including universal,
sample-specific, and invisible ones.
|
Li_Long_Range_Pooling_for_3D_Large-Scale_Scene_Understanding_CVPR_2023
|
Abstract
Inspired by the success of recent vision transformers
and large kernel design in convolutional neural networks
(CNNs), in this paper, we analyze and explore essential
reasons for their success. We claim two factors that are
critical for 3D large-scale scene understanding: a larger
receptive field andoperations with greater non-linearity .
The former is responsible for providing long range con-
texts and the latter can enhance the capacity of the net-
work. To achieve the above properties, we propose a simple
yet effective long range pooling (LRP) module using dila-
tion max pooling, which provides a network with a large
adaptive receptive field. LRP has few parameters, and can
be readily added to current CNNs. Also, based on LRP ,
we present an entire network architecture, LRPNet, for 3D
understanding. Ablation studies are presented to support
our claims, and show that the LRP module achieves bet-
ter results than large kernel convolution yet with reduced
computation, due to its non-linearity. We also demonstrate
the superiority of LRPNet on various benchmarks: LRPNet
performs the best on ScanNet and surpasses other CNN-
based methods on S3DIS and Matterport3D. Code will be
available at https://github.com/li-xl/LRPNet.
|
1. Introduction
With the rapid development of 3D sensors, more and
more 3D data is becoming available from a variety of ap-
plications such as autonomous driving, robotics, and aug-
mented/virtual reality. Analyzing and understanding this
3D data has become essential, which requires efficient and
effective processing. Various data structures, including
point clouds, meshes, multi-view images, voxels, etc, have
been proposed for representing 3D data [73]. Unlike 2D
image data, a point cloud or mesh, is irregular and un-
ordered. These characteristics mean that typical CNNs can-
not directly be applied to such 3D data. Thus, specially
designed networks such as MLPs [49], CNNs [27, 40, 69],
GNNs [53, 68] and transformers [16, 46, 79] are proposed
Figure 1. Qualitative results of Effective Receptive Field
(ERF) [45]. Left: the input and ground truth. Middle: the ERF
and results of the baseline described in Section 3.3. Right: the ERF
and results of LRPNet (ours). The stained areas around the posi-
tions of interest (red dots) represent the range of receptive fields,
with green to red representing the increasing strength of response.
Our method correctly segments the cabinet from the wall with the
proposed effective receptive field.
to perform effective deep learning on them. However, pro-
cessing a large-scale point cloud or mesh is computationally
expensive. Multi-view images contain multiple different
views of a scene. Typically, CNNs are used to process each
view independently and a fusion module is used to combine
the results. Nevertheless, there is unavoidable 3D informa-
tion loss due to the finite or insufficient number of views.
V oxel data has the benefit of regularity like images, even if
the on-surface voxels only contain sparse data. The simplic-
ity of the structure helps to maintain high performance, so
this paper focuses on processing voxel data.
Learning on 3D voxels can be easily implemented by
directly extending the well studied 2D CNN networks to
3D [52, 66, 72]. Considering that 3D voxel data is inher-
ently sparse, some works usually adopt specially designed
3D sparse CNNs [7,15,56] for large-scale scene understand-
ing. Since only the surface of the real scene data has val-
ues, the neighbors around each voxel do not necessarily be-
long to the same object, so a large receptive field is more
conducive to feature extraction. Sparse convolution has ad-
vantages in modeling local structures, but it ignores long
range contexts, which are critical for 3D scene understand-
ing tasks. Transformers [16, 35, 70, 71] have proved useful
in processing 3D scenes, as they can capture the global re-
lationship with their larger receptive field and a better way
to interact. However, they usually have a quadratic compu-
tational complexity, which brings a heavy computing cost.
A straightforward way to incorporate the long range con-
texts into the learning of 3D voxel data is to exploit a large
kernel 3D convolution. However, the number of network
parameters and the amount of computation would increase
cubically due to the additional dimension compared to 2D
images. Besides, current ways of feature interaction or ag-
gregation, such as average pooling and convolution, usually
adopt a linear combination of features of all the locations
in the receptive field. This works for 2D images since all
the locations in the receptive field have valid values, which,
however, does not hold for 3D voxels due to the sparsity na-
ture of 3D scenes. Directly applying such linear interaction
or aggregation on the voxel that has few neighbors in the re-
ceptive field would make the feature of that voxel too small
or over-smoothed and thus less informative.
Taking the above into consideration, and to achieve a
trade-off between the quality of results and computation,
we propose long range pooling (LRP), a simple yet effec-
tive module for 3D scene segmentation. Compared with
previous sparse convolution networks [7, 15], LRP is capa-
ble of increasing the effective receptive field with a negli-
gible amount of computational overhead. Specifically, we
achieve a large receptive field by proposing a novel dila-
tion max pooling to enhance the non-linearity of the neural
network. We further add a receptive field selection mod-
ule, so that each voxel can choose a suitable receptive field,
which is adaptive to the distribution of voxels. The above
two components comprise the LRP module. Unlike dilation
convolution, which is often used to enlarge the receptive
field for 2D images [18,19], our method can achieve a large
receptive field with fewer parameters and computation by
using dilation max pooling. Furthermore, LRP is a simple
and efficient module that can be readily incorporated into
other networks. We construct a more capable neural net-
work, LRPNet , by adding LRP at the end of each stage of
the sparse convolution network, introduced by VMNet [29].
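A dense PyTorch analogue of the LRP module is sketched below; the paper operates on sparse voxel features, and the branch count, dilation rates, and the softmax gate used here for receptive-field selection are assumptions for illustration.

```python
import torch
import torch.nn as nn

class LongRangePoolingDense(nn.Module):
    """Dense sketch of long range pooling: several dilated 3D max-pooling
    branches give progressively larger receptive fields, and a learned
    per-voxel gate selects how much of each branch to use."""

    def __init__(self, channels, dilations=(1, 2, 4, 8), kernel_size=3):
        super().__init__()
        self.pools = nn.ModuleList([
            nn.MaxPool3d(kernel_size, stride=1, dilation=d,
                         padding=d * (kernel_size // 2))
            for d in dilations])
        # Receptive-field selection: per-voxel weights over the branches.
        self.select = nn.Conv3d(channels, len(dilations), kernel_size=1)

    def forward(self, x):                                 # x: (B, C, D, H, W)
        weights = torch.softmax(self.select(x), dim=1)    # (B, num_branches, D, H, W)
        out = x
        for i, pool in enumerate(self.pools):
            out = out + weights[:, i:i + 1] * pool(x)     # gated long-range max responses
        return out
```

Max pooling contributes no extra weights, so the only learned parameters here are in the small 1x1x1 selection convolution.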
Experimental results show that LRPNet achieves a
significant improvement in 3D segmentation accuracy
on large-scale scene datasets, including ScanNet [8],
S3DIS [1] and Matterport3D [4]. Qualitative results are il-
lustrated in Figure 1, which shows the improvement of a
larger receptive field. We also experimentally compare the
effects of the different receptive fields by reducing the num-
ber of dilation max pooling of LRP. Moreover, we explore
the influence of non-linearity of LRP module by replacingmax pooling with average pooling or convolution. Ablation
results show that a larger receptive field and operations with
greater non-linearity for feature aggregation will improve
the segmentation accuracy.
Our contributions are thus:
• a simple and effective module, the long range pooling
(LRP) module, which provides a network with a large
adaptive receptive field without a large number of pa-
rameters,
• a demonstration that a larger receptive field and aggre-
gation operations with greater non-linearity enhance
the capacity of a sparse convolution network, and
• a simple sparse convolution network using the LRP
module, which achieves superior 3D segmentation re-
sults on various large-scale 3D scene benchmarks.
|
Karaev_DynamicStereo_Consistent_Dynamic_Depth_From_Stereo_Videos_CVPR_2023
|
Abstract
We consider the problem of reconstructing a dynamic
scene observed from a stereo camera. Most existing meth-
ods for depth from stereo treat different stereo frames in-
dependently, leading to temporally inconsistent depth pre-
dictions. Temporal consistency is especially important for
immersive AR or VR scenarios, where flickering greatly di-
minishes the user experience. We propose DynamicStereo,
a novel transformer-based architecture to estimate dispar-
ity for stereo videos. The network learns to pool informa-
tion from neighboring frames to improve the temporal con-
sistency of its predictions. Our architecture is designed to
process stereo videos efficiently through divided attention
layers. We also introduce Dynamic Replica, a new bench-
mark dataset containing synthetic videos of people and ani-
mals in scanned environments, which provides complemen-
tary training and evaluation data for dynamic stereo closer
to real applications than existing datasets. Training with
this dataset further improves the quality of predictions of
our proposed DynamicStereo as well as prior methods. Fi-
nally, it acts as a benchmark for consistent stereo methods.
Project page: https://dynamic-stereo.github.io/
|
1. Introduction
Estimating depth from stereo is a fundamental computer
vision problem, with applications in 3D reconstruction,
robot navigation, and human motion capture, among oth-
ers. With the advent of consumer devices featuring multiple
cameras, such as AR glasses and smartphones, stereo can
simplify the 3D reconstruction of everyday scenes, extract-
ing them as content to be experienced in virtual or mixed
reality, or for mixed reality pass-through.
Depth from stereo takes as input two images capturing
the same scene from different viewpoints. It then finds pairs
of matching points, a problem known as disparity estima-
tion. Since the two cameras are calibrated, the matched
points can be projected into 3D using triangulation. While
this process is robust, it is suboptimal when applied to video
data, as it can only reconstruct stereo frames individually,
ignoring the fact that the observations infer properties of
the same underlying objects over time. Even if the camera
moves or the scene deforms non-rigidly, the instantaneous
3D reconstructions are highly correlated and disregarding
this fact can result in inconsistencies.
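Concretely, for a calibrated and rectified stereo pair the triangulation mentioned above reduces to the textbook pinhole relation (a standard result, not specific to this paper), with focal length f, baseline B, principal point (c_x, c_y), pixel (u, v), and disparity d:

Z = \frac{f\,B}{d}, \qquad X = \frac{(u - c_x)\,Z}{f}, \qquad Y = \frac{(v - c_y)\,Z}{f}.

Since depth is inversely proportional to disparity, frame-to-frame disparity jitter translates directly into the depth flickering discussed above.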
In this paper, we thus consider the problem of dynamic
depth from stereo to improve the temporal consistency of
stereo reconstruction from video data.
Traditional approaches to stereo compute the matching
costs between local image patches, aggregating those in an
objective function, and optimizing the latter together with
a regularization term to infer disparities. Examples of such
approaches include max-flow [35] and graph-cut [15]. More
recently, stereo methods have used deep networks learned
from a large number of image pairs annotated with ground-
truth disparities [12, 14, 18, 23]. They usually follow an ap-
proach similar to the traditional methods, but using deep
CNN features for computing the matching costs, and replac-
ing the per-image optimization by a pre-trained regression
deep network, which processes the cost volume and outputs
the estimated disparities.
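As a rough illustration of the cost-volume step in these learning-based pipelines, a correlation-style volume can be formed by comparing left features with horizontally shifted right features; the tensor layout, similarity measure, and absence of any learned aggregation are simplifying assumptions, not the formulation of any specific cited method:

import torch

def correlation_cost_volume(feat_left, feat_right, max_disp):
    """Build a (B, max_disp, H, W) cost volume from deep features.

    The cost at disparity d compares the left feature at column x with the
    right feature at column x - d (zero where the shift leaves the image).
    Generic sketch for illustration only.
    """
    B, C, H, W = feat_left.shape
    volume = feat_left.new_zeros(B, max_disp, H, W)
    for d in range(max_disp):
        if d == 0:
            volume[:, d] = (feat_left * feat_right).mean(dim=1)
        else:
            volume[:, d, :, d:] = (feat_left[..., d:] * feat_right[..., :-d]).mean(dim=1)
    return volume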
In the video setting, matching quality can potentially be
improved by looking for matches across space and time.
For instance, points occluded in one camera at a given point
in time may be visible from both cameras at other times.
Transformer architectures have shown that attention can
be a powerful and flexible method for pooling information
over a range of contexts [6, 8, 9, 42]. Our DynamicStereo
model incorporates self- and cross-attention to extract rel-
evant information across space, time and stereo pairs. Our
architecture relies on divided attention [3] to allow efficient
processing of this high-dimensional space.
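A minimal sketch of what such divided attention could look like, with attention factorized over the temporal and spatial axes of a token tensor; the block structure, tensor layout, and use of nn.MultiheadAttention are assumptions for illustration, not the actual DynamicStereo architecture:

import torch
import torch.nn as nn

class DividedAttentionBlock(nn.Module):
    """Factorized self-attention: attend over time, then over space.

    Tokens are arranged as (B, T, S, C), where T indexes frames (and/or the
    two stereo views) and S indexes spatial positions. Generic sketch in the
    spirit of divided attention [3].
    """
    def __init__(self, dim, heads=8):
        super().__init__()
        self.attn_time = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.attn_space = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, x):                          # x: (B, T, S, C)
        B, T, S, C = x.shape
        # Attention along the temporal axis (one sequence per spatial site).
        t = x.permute(0, 2, 1, 3).reshape(B * S, T, C)
        n = self.norm1(t)
        t = t + self.attn_time(n, n, n, need_weights=False)[0]
        x = t.reshape(B, S, T, C).permute(0, 2, 1, 3)
        # Attention along the spatial axis (one sequence per frame).
        s = x.reshape(B * T, S, C)
        n = self.norm2(s)
        s = s + self.attn_space(n, n, n, need_weights=False)[0]
        return s.reshape(B, T, S, C)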
As a learning-based approach, we wish to learn priors
from data representative of real-life 3D dynamic reconstruc-
tion applications, where videos depict people or animals
moving and interacting with objects. There are several syn-
thetic video datasets [10,27,41] commonly used for training
stereo and optical flow methods, but they contain abstract
scenes with several layers of moving objects that bear lit-
tle resemblance to real life. More realistic stereo datasets
also exist [45, 49], but they either do not contain video se-
quences or are focused on static scenes. Given these lim-
itations, as an additional contribution we propose a new
synthetic stereo dataset showing moving human and animal
characters inside realistic physical spaces from the Replica
dataset [40]. We call this new dataset Dynamic Replica
(DR), and we use it for learning dynamic stereo matches.
DR contains 524 videos of virtual humans and animals em-
bedded in realistic digital scans of physical environments
(see Tab. 1 and Fig. 2). We show that DR can significantly
boost the quality of dynamic stereo methods compared to
training them only on existing depth-from-stereo datasets.
To summarise, we make three contributions . (1) We
introduce DynamicStereo , a transformer-based architecture
that improves dynamic depth from stereo by jointly process-
ing stereo videos. (2) We release Dynamic Replica , a new
benchmark dataset for learning and evaluating models for
dynamic depth from stereo. (3) We demonstrate state-of-
the-art dynamic stereo results in a variety of benchmarks.
[Figure 2 legend: SceneFlow, KITTI, Middlebury, Sintel, ETH3D, Dynamic Replica (ours)]
Figure 2. Example frames from depth-from-stereo datasets.
We visually compare current datasets to Dynamic Replica , which
contains renderings of every-day scenes with people and animals
and differs from existing datasets in size, realism, and content.
|
Liu_3D_Line_Mapping_Revisited_CVPR_2023
|
Abstract
In contrast to sparse keypoints, a handful of line segments
can concisely encode the high-level scene layout, as they
often delineate the main structural elements. In addition to
offering strong geometric cues, they are also omnipresent in
urban landscapes and indoor scenes. Despite their appar-
ent advantages, current line-based reconstruction methods
are far behind their point-based counterparts. In this paper
we aim to close the gap by introducing LIMAP , a library
for 3D line mapping that robustly and efficiently creates
3D line maps from multi-view imagery. This is achieved
through revisiting the degeneracy problem of line triangu-
lation, carefully crafted scoring and track building, and
exploiting structural priors such as line coincidence, paral-
lelism, and orthogonality. Our code integrates seamlessly
with existing point-based Structure-from-Motion methods
and can leverage their 3D points to further improve the line
reconstruction. Furthermore, as a byproduct, the method
is able to recover 3D association graphs between lines and
points / vanishing points (VPs). In thorough experiments,
we show that LIMAP significantly outperforms existing ap-
proaches for 3D line mapping. Our robust 3D line maps
also open up new research directions. We show two exam-
ple applications: visual localization and bundle adjustment,
where integrating lines alongside points yields the best re-
sults. Code is available at https://github.com/cvg/limap.
|
1. Introduction
The ability to estimate 3D geometry and build sparse
maps via Structure-from-Motion (SfM) has become ubiq-
uitous in 3D computer vision. These frameworks enable
important tasks such as building maps for localization [60],
providing initial estimates for dense reconstruction and re-
finement [65], and novel view synthesis [45, 48]. Currently,
the field is dominated by point-based methods in which 2D
keypoints are detected, matched, and triangulated into 3D
maps [20, 64]. These sparse maps offer a compact scene rep-
resentation, only reconstructing the most distinctive points.
While there has been tremendous progress in point-
based reconstruction methods, they still struggle in scenes
(a) Point mapping [13, 64] (b) Line mapping
(c) Line-point association (d) Line-VP association
Figure 1. In this paper, we propose a robust pipeline for mapping
3D lines (b), which offers stronger geometric clues about the scene
layout compared to the widely used point mapping (a). Part of
the success of our pipeline attributes to the modeling of structural
priors such as coincidence (c), and parallelism / orthogonality (d).
The corresponding 3D association graphs between lines and points
/ vanishing points (VPs) are also recovered from our system as a
byproduct. The degree-1 point and degree-2 junctions are colored
in blue and red respectively in (c), while parallel lines associated
with the same VP are colored the same in (d).
where it is difficult to detect and match sufficiently many sta-
ble keypoints, such as in indoor areas. On the contrary, these
man-made scenes contain abundant lines, e.g. in walls, win-
dows, doors, or ceilings. Furthermore, lines exhibit higher
localization accuracy with less uncertainty in pixels [16].
Last but not least, lines appear in highly structured patterns,
often satisfying scene-wide geometric constraints such as
co-planarity, coincidence (line intersections), parallelism,
and orthogonality. In practice, lines suffer from different is-
sues, such as poor endpoint localization and partial occlusion.
However, recent line detectors and matchers are bridging the
gap of performance between points and lines [25, 46, 84],
making it timely to revisit the line reconstruction problem.
Despite their rich geometric properties and abundance in
the real world, there exist very few line-based reconstruction
methods in the literature [22, 23, 44, 77]. In practical applica-
tions, they have also not achieved the same level of success
as their point-based counterparts. We believe this is due to
several intrinsic challenges specific to line mapping:
•Inconsistent endpoints. Due to partial occlusion, lines
often have inconsistent endpoints across images.
•Line fragmentation. In each image there might be mul-
tiple line segments that belong to the same line in 3D.
This makes the process of creating track associations more
complex compared to building 3D point tracks.
•No two-view geometric verification. While point
matches can be verified in two views via epipolar geome-
try, lines require at least three views to filter.
•Degenerate configurations. In practice line triangula-
tion is more prone to unstable configurations (see Fig. 8),
e.g. becoming degenerate whenever the line is parallel
with the camera motion (i.e. to epipolar lines); see the sketch after this list.
•Weaker descriptor-based matching. State-of-the-art de-
scriptors for line segments are far behind their point-based
counterparts, putting more emphasis on geometric verifi-
cation and filtering during reconstruction.
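To make the triangulation degeneracy above concrete, the standard two-view construction from multi-view geometry back-projects each image line to a plane and intersects the two planes; this is the textbook formulation (e.g., Hartley and Zisserman), not necessarily LIMAP's implementation:

\pi_i = P_i^{\top}\,\ell_i \in \mathbb{R}^{4}, \qquad L^{*} = \pi_1\,\pi_2^{\top} - \pi_2\,\pi_1^{\top},

where P_i are the 3x4 projection matrices, \ell_i the homogeneous 2D lines, and L^{*} the dual Plücker matrix of the triangulated 3D line. The construction becomes ill-conditioned when \pi_1 and \pi_2 (nearly) coincide, i.e., when both image lines are close to epipolar lines, which is exactly the parallel-to-camera-motion case listed above.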
In this paper we aim to reduce the gap between point-
based and line-based mapping solutions. We propose a new
robust mapping method, LIMAP, that integrates seamlessly
into existing open-source point-based SfM frameworks [64,
67, 80]. By sharing the code with the research community
we hope to enable more research related to lines; both for
low-level tasks (such as improving line segment detection
and description) and for integrating lines into higher-level
tasks (such as visual localization or dense reconstruction). In
particular, we make the following contributions in the paper:
•We build a new line mapping system that reliably recon-
structs 3D line segments from multi-view RGB images .
Compared to previous approaches, our line maps are sig-
nificantly more complete and accurate, while having more
robust 2D-3D track associations.
•We achieve this by automatically identifying and ex-
ploiting structural priors such as coincidence (junctions)
and parallelism. Our technical contribution spans all
stages of line mapping including triangulating proposals,
scoring, track building, and joint optimization, with 3D
line-point / VP association graphs output as a byproduct.
•The framework is flexible such that researchers can easily
change components (e.g. detectors, matchers, vanishing
point estimators, etc.) or integrate additional sensor data
(e.g. depth maps or other 3D information).
•We are the first to go beyond small test sets by quanti-
tatively evaluating on both synthetic and real datasets to
benchmark the performance, with hundreds of images for
each scene, in which LIMAP consistently and signifi-
cantly outperforms existing approaches .
•Finally, we demonstrate the usefulness of having robust
line maps by showing improvement over purely point-
based methods in tasks such as visual localization and
bundle adjustment in Structure-from-Motion.
|
Liu_Pose-Disentangled_Contrastive_Learning_for_Self-Supervised_Facial_Representation_CVPR_2023
|
Abstract
Self-supervised facial representation has recently at-
tracted increasing attention due to its ability to perform
face understanding without relying on large-scale anno-
tated datasets heavily. However, analytically, current
contrastive-based self-supervised learning (SSL) still per-
forms unsatisfactorily for learning facial representation.
More specifically, existing contrastive learning (CL) tends
to learn pose-invariant features that cannot depict the pose
details of faces, compromising the learning performance.
To conquer the above limitation of CL, we propose a novel
Pose-disentangled Contrastive Learning (PCL) method for
general self-supervised facial representation. Our PCL first
devises a pose-disentangled decoder (PDD) with a deli-
cately designed orthogonalizing regulation, which disen-
tangles the pose-related features from the face-aware fea-
tures; therefore, pose-related and other pose-unrelated fa-
cial information could be performed in individual subnet-
works and do not affect each other’s training. Furthermore,
we introduce a pose-related contrastive learning scheme
that learns pose-related information based on data augmen-
tation of the same image, which would deliver more effective
face-aware representation for various downstream tasks.
We conducted linear evaluation on four challenging down-
stream facial understanding tasks, i.e., facial expression
recognition, face recognition, AU detection and head pose
estimation. Experimental results demonstrate that PCL sig-
nificantly outperforms cutting-edge SSL methods. Our Code
is available at https://github.com/DreamMr/PCL.
|
1. Introduction
Human face perception and understanding is an impor-
tant and long-lasting topic in computer vision. By analyzing
*Equally-contributed first authors
†Corresponding author
[Figure 1 panels: (a) motivation of SimCLR and our PCL; (b) accuracy (%) of SimCLR vs. PCL (ours) on FER (RAF-DB), Identity (LFW), and AU (DISFA)]
Figure 1. The motivation of our method. Affected by differ-
ent poses, the popular CL method, e.g., SimCLR, treats pose and
other face information uniformly, resulting in sub-optimal results.
To alleviate this limitation for CL, our PCL attempts to disentan-
gle the learning on pose-related features and pose-unrelated facial
features, thus achieving more effective self-supervised facial rep-
resentation learning for downstream facial tasks.
faces, we can obtain various kinds of information, including
identities, emotions, and gestures. Recently, deep convolu-
tional neural networks (DCNNs) [20, 30, 62] have achieved
promising facial understanding results, but they require a
large amount of annotated data for model training. Since
labeling face data is generally a labor- and time-costly pro-
cess [61], it becomes important to enable DCNN models to
learn from unlabelled face images, which are much easier
to collect. Accordingly, researchers have introduced self-
supervised learning (SSL) schemes to achieve better learn-
ing performance on unlabeled facial data.
To achieve effective SSL performance, contrastive learn-
ing (CL) based strategy is widely applied in the community
[6,26,43]. In general, a CL-based method pulls two features
representing similar samples closer to each other and pushes
those of diverse samples far away from each other [56],
thus facilitating the DCNNs to learn various visual patterns
without annotations. Generally, without supervision, simi-
lar/positive samples of CL are obtained by augmenting the
same image, and the diverse/negative samples can refer to
different images. To learn from unlabelled face images, ex-
isting CL-based methods [48,53,65] have achieved effective
self-supervised facial representation learning.
However, despite progress, we found that directly uti-
lizing CL-based methods still obtained sub-optimal perfor-
mance due to the facial poses. In particular, CL-based meth-
ods treat the augmented images from the same image as
positive samples. In such a manner, the learned features
are pose-invariant and cannot capture the variations of
facial poses. Nevertheless, poses are one significant con-
sideration for facial understanding [1, 51]; for example, a
person tends to lower their head when they feel sad.
To tackle the above limitation of CL, we propose a Pose-
disentangled Contrastive Learning (PCL) method, which
disentangles the learning on pose-related features and pose-
unrelated facial features for CL-based self-supervised facial
representation learning. Fig. 1 shows an intuitive exam-
ple of contrastive learning results. Specifically, our method
introduces two novel modules, i.e., a pose-disentangled de-
coder (PDD) and a pose-related contrastive learning scheme
(see Fig. 2). In the PDD, we first obtain the face-aware
features from a backbone, such as ResNet [10, 27], Trans-
former [11, 16–18, 40], and then disentangle pose-related
features and pose-unrelated facial features from the face-
aware features using two different subnets through facial
reconstruction. In facial reconstruction, the combination of
one pose-unrelated facial feature and one pose-related fea-
ture can reconstruct an image with the same content as the
pose-unrelated facial feature and the same pose as the pose-
related feature. Furthermore, an orthogonalizing regulation
is designed to make the pose-related and pose-unrelated fea-
tures more independent.
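One plausible instantiation of such an orthogonalizing regulation is a loss that penalizes the per-sample correlation between the two disentangled features; the exact form used by PCL may differ, so treat this as an assumed sketch:

import torch
import torch.nn.functional as F

def orthogonality_loss(pose_feat, face_feat, eps=1e-8):
    """Penalize overlap between pose-related and pose-unrelated features.

    pose_feat, face_feat: (B, D) features from the two PDD subnets.
    Squared cosine similarity per sample is zero when the two features are
    orthogonal. Assumed instantiation, not the official formulation.
    """
    p = F.normalize(pose_feat, dim=1, eps=eps)
    f = F.normalize(face_feat, dim=1, eps=eps)
    return ((p * f).sum(dim=1) ** 2).mean()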
In the pose-related contrastive learning, instead of learn-
ing pose-invariant features by normal CL, we introduce
two types of data augmentation for one face image, one
containing pose augmentation and another only containing
pose-unrelated augmentation. Therefore, image pairs gen-
erated by using pose augmentation contain different poses
and serve as negative pairs, whereas image pairs generated
from pose-unrelated augmentation contain the same pose as
the original image and are treated as positive pairs. The
pose-related CL is conducted to learn pose-related features,
and face CL is used to learn pose-unrelated facial features.
Therefore, our proposed pose-related CL can learn detailed
pose information without disturbing the learning of pose-
unrelated facial features in the images.
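The pose-related term itself could then take a standard InfoNCE form over the two augmentation streams; the pairing below (same-pose augmentations as positives, pose-augmented views as negatives) follows the description above, while the loss form, temperature, and tensor shapes are assumptions:

import torch
import torch.nn.functional as F

def pose_contrastive_loss(z_anchor, z_same_pose, z_diff_pose, temperature=0.1):
    """InfoNCE-style loss on pose-related features.

    z_anchor:    (B, D) pose features of the original images.
    z_same_pose: (B, D) features of pose-unrelated augmentations (positives).
    z_diff_pose: (B, D) features of pose-augmented views (negatives).
    Assumed instantiation of the pose-related CL scheme.
    """
    a = F.normalize(z_anchor, dim=1)
    pos = F.normalize(z_same_pose, dim=1)
    neg = F.normalize(z_diff_pose, dim=1)
    l_pos = (a * pos).sum(dim=1, keepdim=True) / temperature   # (B, 1)
    l_neg = a @ neg.t() / temperature                          # (B, B)
    logits = torch.cat([l_pos, l_neg], dim=1)
    labels = torch.zeros(a.size(0), dtype=torch.long, device=a.device)
    return F.cross_entropy(logits, labels)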
In general, the major contributions of this paper are sum-
marized as follows:
1. We propose a novel pose-disentangled contrastive
learning framework, termed PCL, for learning unla-
beled facial data. Our method introduces an effec-
tive mechanism that could disentangle pose features
from facial features and enhance contrastive learning
for pose-related facial representation learning.2. We introduce a PDD using facial image reconstruction
with a delicately designed orthogonalizing regulation
to help effectively identify and separate the face-aware
features obtained from the backbone into pose-related
and pose-unrelated facial features. The PDD is easy-
to-implement and efficient for head pose extraction.
3. We further propose a pose-related contrastive learn-
ing scheme for pose-related feature learning. Together
with face contrastive learning on pose-unrelated facial
features, we make both learning schemes cooperate
with each other adaptively and obtain more effective
learning performance on the face-aware features.
4. Our PCL can be well generalized to several down-
stream tasks, e.g., facial expression recognition (FER),
facial AU detection, facial recognition, and head pose
estimation. Extensive experiments show the superior-
ity of our PCL over existing SSL methods, accessing
state-of-the-art performance on self-supervised facial
representation learning.
|
Liu_Learning_Orthogonal_Prototypes_for_Generalized_Few-Shot_Semantic_Segmentation_CVPR_2023
|
Abstract
Generalized few-shot semantic segmentation (GFSS)
distinguishes pixels of base and novel classes from the back-
ground simultaneously, conditioning on sufficient data of
base classes and a few examples from novel class. A typical
GFSS approach has two training phases: base class learn-
ing and novel class updating. Nevertheless, such a stand-
alone updating process often compromises the well-learnt
features and results in performance drop on base classes.
In this paper, we propose a new idea of leveraging Projec-
tion onto Orthogonal Prototypes (POP), which updates fea-
tures to identify novel classes without compromising base
classes. POP builds a set of orthogonal prototypes, each of
which represents a semantic class, and makes the prediction
for each class separately based on the features projected
onto its prototype. Technically, POP first learns prototypes
on base data, and then extends the prototype set to novel
classes. The orthogonal constraint of POP encourages the
orthogonality between the learnt prototypes and thus miti-
gates the influence on base class features when generalizing
to novel prototypes. Moreover, we capitalize on the residual
of feature projection as the background representation to
dynamically fit semantic shifting (i.e., background no longer
includes the pixels of novel classes in updating phase). Ex-
tensive experiments on two benchmarks demonstrate that
our POP achieves superior performances on novel classes
without sacrificing much accuracy on base classes. No-
tably, POP outperforms the state-of-the-art fine-tuning by
3.93% overall mIoU on PASCAL-5iin 5-shot scenario.
|
1. Introduction
Semantic segmentation is to assign semantic labels to
every pixel of an image. With the recent development of
CNNs [10, 13] and vision transformers [6, 18, 23, 24, 35, 41,
42], the state-of-the-art networks have successfully pushed
the limits of semantic segmentation [1, 3, 25, 49] with re-
markable performance improvements. Such achievements
*Corresponding author.
[Figure 1 diagram: (a) fine-tuning and (b) POP with orthogonal prototypes; each pipeline shows an encoder, classifiers, projection, and Phase 1 / Phase 2 updates; legend: base class, novel class, background, updated in Phase 2]
Figure 1. Comparisons between fine-tuning [26] and Projection
onto Orthogonal Prototypes (POP). In novel class updating phase,
fine-tuning (a) updates the network to predict base and novel
classes, and inevitably compromises the well-learnt representa-
tions for base classes. Instead, POP (b) only updates prototypes
for novel classes and executes predictions for each class separately
on the features projected onto different orthogonal prototypes.
heavily rely on the requirements of large quantities of pixel-
level annotations and it is also difficult to directly apply the
models to the classes unseen in the training set. A straight-
forward way to alleviate this issue is to leverage Few-shot
Semantic Segmentation (FSS) [30, 36, 44], which utilizes
the limited support annotations from unseen/novel classes
to adapt the models. Nevertheless, FSS performs on the as-
sumption that the support images and the query image con-
tain the same novel classes, and solely emphasizes the seg-
mentation of one novel class in the query image at a time. A
more practical scenario, namely Generalized Few-shot Se-
mantic Segmentation (GFSS) [33], has recently been presented to
simultaneously identify the pixels of base and novel classes
in a query image.
A typical GFSS solution has proceeded along two train-
ing phases: base class learning and novel class updating. In
the first phase, models are trained on abundant base classes’
annotations to classify the pixels of base categories, and
then updated with the limited labeled novel examples in
the second phase to additionally recognize pixels of novel
classes. For instance, Myers Dean et al. [26] sample some
base data plus novel examples as the supervision to fine-
tune the network, as depicted in Figure 1(a). Despite having
good performances on novel classes, there still exists a clear
performance degradation on base classes. We speculate that
this may be the result of compromising the well-learnt fea-
tures of base classes during fine-tuning. As such, a valid ques-
tion then emerges: is it possible for GFSS to generalize
the model to novel classes without sacrificing much
segmentation capability on base classes? In an effort to
answer this question, we seek to represent an image via a
group of uncorrelated feature components each of which
characterizes a specific class. By doing so, it is readily ap-
plicable to learn and integrate new components for novel
classes without affecting the ones learnt for base classes.
To materialize our idea, we propose a new Projection
onto Orthogonal Prototypes (POP) framework for GFSS.
POP learns a series of orthogonal prototypes and each pro-
totype corresponds to one specific semantic class. As shown
in Figure 1(b), POP employs an encoder to extract feature
maps of a given query image and then projects them onto
prototypes. The projection on each prototype is regarded
as the discriminative representation with respect to the cor-
responding class and exploited to predict the probability
map of pixels belonging to the class. More specifically,
POP deliberately devises the learning of prototypes from
three standpoints. The first one is to freeze the base pro-
totypes when learning novel ones from support images in
the updating phase. In this way, feature projections on base
prototypes maintain their discriminability of base classes.
Second, POP encourages the prototypes of base and novel
classes to be orthogonal through a prototype orthogonality
loss. Such a constraint decorrelates features projected onto
different prototypes and mitigates the inter-class confusion
caused by extending to novel classes. Finally, in view that
background no longer contains the pixels of novel classes
in updating phase, known as “semantic shifting”, POP mea-
sures the residual of feature projection as the background
representation instead of learning a prototype for “back-
ground”. This way further improves the differentiation be-
tween novel classes and background.
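A minimal sketch of the projection-based scoring described above, assuming per-pixel features and (near-)orthonormal prototypes; the actual POP classifier head is likely more elaborate:

import torch
import torch.nn.functional as F

def pop_scores(features, prototypes):
    """Score pixels by projecting features onto orthogonal class prototypes.

    features:   (N, D) per-pixel features.
    prototypes: (K, D) one prototype per class (assumed near-orthonormal).
    Returns class scores (N, K) from the projection coefficients and a
    background score (N,) from the norm of the projection residual.
    Assumed instantiation of POP's scoring, not the official code.
    """
    protos = F.normalize(prototypes, dim=1)     # (K, D)
    coeff = features @ protos.t()               # (N, K) projection coefficients
    recon = coeff @ protos                      # (N, D) component inside span(protos)
    residual = features - recon                 # orthogonal complement
    background_score = residual.norm(dim=1)
    return coeff, background_score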
The main contribution of this work is the proposal of a
Projection on Orthogonal Prototypes (POP) framework for
generalized few-shot semantic segmentation. The solution
also leads to the elegant views of how to adapt the trained
model to novel classes without sacrificing well-learnt fea-
tures, and how to represent background pixels dynamically
in the context of semantic shifting, which are problems not
yet fully explored in the literature. We demonstrate that
POP outperforms the state-of-the-art fine-tuning [26] on
two benchmarks (PASCAL-5iand COCO-20i) with evident
improvements on both base and novel classes.
|
Liu_NoisyQuant_Noisy_Bias-Enhanced_Post-Training_Activation_Quantization_for_Vision_Transformers_CVPR_2023
|
Abstract
The complicated architecture and high training cost of vi-
sion transformers urge the exploration of post-training quan-
tization. However, the heavy-tailed distribution of vision
transformer activations hinders the effectiveness of previ-
ous post-training quantization methods, even with advanced
quantizer designs. Instead of tuning the quantizer to bet-
ter fit the complicated activation distribution, this paper
proposes NoisyQuant, a quantizer-agnostic enhancement
for the post-training activation quantization performance
of vision transformers. We make a surprising theoretical
discovery that for a given quantizer, adding a fixed Uniform
noisy bias to the values being quantized can significantly
reduce the quantization error under provable conditions.
Building on the theoretical insight, NoisyQuant achieves the
first success on actively altering the heavy-tailed activation
distribution with additive noisy bias to fit a given quantizer.
Extensive experiments show NoisyQuant largely improves
the post-training quantization performance of vision trans-
former with minimal computation overhead. For instance,
on linear uniform 6-bit activation quantization, NoisyQuant
improves SOTA top-1 accuracy on ImageNet by up to 1.7%,
1.1% and 0.5% for ViT, DeiT, and Swin Transformer respec-
tively, achieving on-par or even higher performance than
previous nonlinear, mixed-precision quantization.
|
1. Introduction
Inspired by the success of Self Attention (SA)-based
transformer models in Natural Language Processing (NLP)
tasks [31], recent researches make significant progress in
applying transformer models to the field of computer vi-
sion [3, 10, 22, 30]. In the meantime, the typical design of
* Equal contribution. Corresponding Author.
transformer models induces large model sizes, high compu-
tational consumption, and long training time. For instance,
the widely used DeiT-Base model [30] contains 86M param-
eters, logs 18G floating-point operations for a single input,
and requires 300 epochs of training on the ImageNet dataset.
This leads to significant difficulties in hardware deployment.
In facing such difficulty, a number of compression and ac-
celeration methods are applied to vision transformer models,
including pruning [6, 39], quantization [24, 42], and neural
architecture search [5], etc.
Among these methods, quantization appears as one of
the most effective and widely applied ways [11]. Quan-
tization process uses a predefined “quantizer” function to
convert the continuous representation of weights and acti-
vations into a small number of discrete symbols, therefore
enabling low-precision representations for straightforward
memory savings. For DNN models, the approximation er-
ror made by the quantizer inevitably leads to performance
drop. A series of work focuses on Quantization-Aware Train-
ing (QAT) that finetunes the quantized model at low preci-
sion [8, 9, 25, 37, 47]. However, given the high training cost
and the complicated computation graph of vision transformer
models, retraining the model at low precision could be costly
and unstable [42]. Alternatively, Post-Training Quantization
(PTQ) is preferable for vision transformers as it eliminates
the need for re-training or finetuning the quantized model,
instead only adjusts the design of the quantizer based on the
full-precision pretrained model and a small set of sampled
calibration data [1, 2, 23, 32, 36]. For example, linear quan-
tizers [7, 29, 36, 38] reduce quantization error by shifting,
clipping, and scaling the values to be quantized. Nonlinear
quantizers [13, 42, 44] further adjust the width and location
of each quantization bin to better fit the distribution.
Unfortunately, though progresses are made in designing
better PTQ quantizers, it still appears to be significantly chal-
lenging to quantize vision transformer models, especially
for activation quantization. Transformer layers produce up
Figure 1. Overview of the NoisyQuant pipeline (orange box) with comparison to EasyQuant [36] (gray box). Histograms of input
activations, quantized activations, and layer outputs are illustrated in the pipeline. Starting from the input activation X following a GELU
function, where the green histogram is plotted in linear-scale and the gray in log-scale, NoisyQuant adds a fixed Noisy Bias N sampled
from a Uniform distribution onto X. Adding the noise before quantization flattens the peaks in the activation distribution to make it more
friendly to quantization, as visualized in the linear/log-scaled histogram of X+N, and theoretically proved in Sec. 3.2. The noise is
removed from the result of the fixed-point activation-weight multiplication with a denoising bias term, which we formulate in Sec. 3.3. The
rightmost sub-figures illustrate the layer output distribution of EasyQuant and NoisyQuant. Compared to the output of the floating-point
model (yellow), NoisyQuant output follows the distribution closely and achieves up to 60% output error reduction on some transformer
layers, as shown in Sec. 4. The resulting PTQ performance improvement of the entire model is provided in Sec. 5.
to millions of activation values with sophisticated distribu-
tions. For instance, outputs of the GELU function [15] are
asymmetrically distributed, with spikes in the histogram at
some values and a long tail in a large range (see top-left
of Fig. 1). Some linear projection layers also lead to signif-
icantly large activation values that are sparsely distributed
in a very long tail [23]. Consequently, low-precision PTQ
on vision transformer suffers from performance degradation,
even utilizing non-linear or mixed-precision quantizers at
the cost of additional data communication and computation
overheads [23, 43]. No linear uniform PTQ method achieves
good performance on vision transformer models.
This paper provides a brand new perspective in dealing
with the complicated vision transformer activation distribu-
tion. Instead of adding more tricks in the quantizer design to
fit the activation distribution, this work explores the potential
of actively and cost-effectively altering the distribution be-
ing quantized, making it more friendly to a given quantizer.
The pipeline of our proposed method is illustrated in Fig. 1.
Specifically, we make a surprising discovery that for any
quantizer, the quantization error can be significantly reduced
by adding a fixed noisy bias sampled from a Uniform distri-
bution to the activation before quantization. We theoretically
prove the condition when the quantization error reduction
can be achieved. On this basis, we propose NoisyQuant , a
plug-and-play , quantizer-agnostic enhancement on the post-
training activation quantization of vision transformer models.
For each layer, we sample a Noisy Bias based on the input
activation distribution following our theoretical analysis, and compute the corresponding denoising bias to retain the cor-
rect output of the layer. At inference time, the Noisy Bias is
added to the input activation before quantization, and the de-
noising bias is removed from the layer output. This process
significantly reduces the quantization error with minimal
computation overhead. NoisyQuant leads to significant im-
provement in the PTQ performance of state-of-the-art vision
transformers. Applying NoisyQuant on top of a uniform
linear quantization achieves on-par performance to SOTA
mixed-precision, nonlinear PTQ methods [23, 42]. Adding
NoisyQuant on top of these nonlinear quantizers achieves
further performance gain.
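A simplified sketch of the noisy-bias mechanism for a single linear layer, with an illustrative uniform quantizer; the noise range, quantizer, and bias bookkeeping are placeholders chosen for clarity rather than the paper's exact formulation:

import torch

def uniform_quant(x, scale):
    # Simple uniform quantizer used only for illustration.
    return torch.round(x / scale) * scale

def noisy_quant_linear(x, weight, bias, noisy_bias, scale):
    """y = Quant(x + n) @ W^T + (bias - n @ W^T)

    x:          (B, D_in) input activations.
    noisy_bias: (D_in,) fixed Uniform noise sampled once per layer, offline.
    Since Quant(x + n) approximates x + n, subtracting the precomputable
    denoising term n @ W^T recovers roughly x @ W^T. Sketch only.
    """
    x_q = uniform_quant(x + noisy_bias, scale)
    denoise = noisy_bias @ weight.t()        # (D_out,), can be folded into bias
    return x_q @ weight.t() + (bias - denoise)

# Example (assumed): sample noisy_bias once per layer from U(-a, a), with a
# chosen from the calibration data according to the paper's analysis.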
To the best of our knowledge, this paper makes the fol-
lowing novel contributions:
•Theoretically shows the possibility and proves feasible
conditions for reducing the quantization error of heavy-
tailed distributions with a fixed additive noisy bias;
•Proposes NoisyQuant, a quantizer-agnostic enhance-
ment for PTQ performance on activation quantization.
NoisyQuant achieves the first success in actively refin-
ing the distribution being quantized to reduce quantiza-
tion error following the theoretical results on additive
noisy bias, with minimal computation overhead;
•Demonstrates consistent performance improvement by
applying NoisyQuant on top of existing PTQ quantiz-
ers. For 6-bit PTQ, NoisyQuant improves the ImageNet
top-1 accuracy of uniform linear quantized vision trans-
former models by up to 1.7%, and improves SOTA
nonlinear quantizer PTQ4ViT [43] by up to 0.7%.
|
Liu_LEMaRT_Label-Efficient_Masked_Region_Transform_for_Image_Harmonization_CVPR_2023
|
Abstract
We present a simple yet effective self-supervised pre-
training method for image harmonization which can lever-
age large-scale unannotated image datasets. To achieve
this goal, we first generate pre-training data online with
our Label-Efficient Masked Region Transform (LEMaRT)
pipeline. Given an image, LEMaRT generates a foreground
mask and then applies a set of transformations to perturb
various visual attributes, e.g., defocus blur, contrast, satu-
ration, of the region specified by the generated mask. We
then pre-train image harmonization models by recovering
the original image from the perturbed image. Secondly, we
introduce an image harmonization model, namely SwinIH,
by retrofitting the Swin Transformer [27] with a combina-
tion of local and global self-attention mechanisms. Pre-
training SwinIH with LEMaRT results in a new state of
the art for image harmonization, while being label-efficient,
i.e., consuming less annotated data for fine-tuning than ex-
isting methods. Notably, on iHarmony4 dataset [8], SwinIH
outperforms the state of the art, i.e., SCS-Co [16] by a mar-
gin of 0.4dB when it is fine-tuned on only 50% of the train-
ing data, and by 1.0dB when it is trained on the full training
dataset.
|
1. Introduction
The goal of image harmonization is to synthesize photo-
realistic images by extracting and transferring foreground
regions from an image to another (background) image. The
main challenge is the appearance mismatch between the
foreground and the surrounding background, due to dif-
ferences in camera and lens settings, capturing conditions,
such as illumination, and post-capture image processing.
Image harmonization aims to resolve this mismatch by ad-
justing the appearance of the foreground in a composite im-
age to make it compatible with the background. Research
in image harmonization has relevant applications in photo-
realistic image editing and enhancement [42,44], video syn-
thesis [23, 37] and data augmentation for various computer
vision tasks [11, 12, 35].
[Figure 1 residue: top row shows the composite image (input), transformed image, and output image; bottom chart plots PSNR (dB) vs. percentage of training data for LEMaRT (ours) against SCS-Co (38.8), DHT+ (37.9), and iS2AM (38.2)]
Figure 1. Top: given an image, LEMaRT applies a set of transfor-
mations, e.g., brightness, hue adjustment, to obtain a transformed
image. The transformed image is then combined with the original
image to form a composite image, which is used to pre-train our
SwinIH image harmonization model. As shown in the right-hand
column, SwinIH is capable of reconstructing photo-realistic out-
put images after pre-training and fine-tuning. Bottom: using our
LEMaRT pre-training scheme, our image harmonization model
(SwinIH) surpasses state-of-the-art (SOTA) counterparts with less
than 40% of the training data from iHarmony4 for fine-tuning.
Traditional image harmonization approaches perform
color transforms to match the low-level color statistics of
the foreground to the background with the aim to achieve
photorealism [22, 31, 33, 39]. However, the generalization
ability of these methods is questionable because the eval-
uation was only conducted at a small scale, mainly using
human judgement. More recent works [8] have constructed
real image harmonization datasets with tens to thousands
of images to train learning-based methods. However, due
to the bottleneck of manual editing, these datasets do not
match the scale often required to train large-scale neural
networks. Rendered image datasets [3, 15] are more scal-
able but they suffer from the domain gap between synthetic
and real images. As a result, the performance of image har-
monization models is constrained by the limited size of a
few existing datasets [8, 20] on which they can be trained.
Inspired by the impressive performance leap achieved by
pre-trained models [17, 29] on various downstream tasks,
e.g., image classification, object detection, image caption-
ing, in this work, we introduce a novel self-supervised pre-
training method to boost the performance of image harmo-
nization models while being label-efficient, i.e., consum-
ing small amounts of fine-tuning data. The novelty of our
technique lies in the use of foreground masking strategies
and the perturbation of foreground visual attributes to self-
generate training data without annotations. Hence, we name
our pre-training method Label-Efficient Masked Region
Transform (LEMaRT). In the first step, LEMaRT proposes
pseudo foreground regions in an image. Subsequently, it ap-
plies a set of transformations to perturb visual attributes of
the foreground, including contrast, sharpness, blur and satu-
ration. These transformations aim to mimic the appearance
discrepancy between the foreground and the background.
Using the transformed image, i.e., image with the perturbed
foreground, as the input, LEMaRT pre-trains image harmo-
nization models to reconstruct the original image, as shown
in the top half of Figure 1.
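A rough sketch of this online data generation, where the transforms, the source of the mask, and the value range are placeholders; the paper's actual transform set and masking strategy are richer:

import torch

def lemart_composite(image, mask, perturb_fns):
    """Create a (composite, target) pre-training pair.

    image:       (B, 3, H, W) unannotated images in [0, 1].
    mask:        (B, 1, H, W) binary foreground mask (placeholder source,
                 e.g., a generated region proposal).
    perturb_fns: list of photometric transforms (brightness, contrast,
                 saturation, blur, ...); the perturbed pixels are pasted
                 back only inside the mask.
    Assumed sketch of the LEMaRT pipeline, not the authors' code.
    """
    perturbed = image
    for fn in perturb_fns:
        perturbed = fn(perturbed)
    composite = mask * perturbed + (1.0 - mask) * image
    target = image                      # reconstruction target
    return composite, target

# Example perturbation (illustrative): global brightness scaling.
brightness = lambda img: (img * 1.3).clamp(0.0, 1.0)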
Subsequently, we design an image harmonization model
based on Swin Transformer [27], namely SwinIH, which is
short for Swin Image Harmonization. We build our model
upon Swin Transformer instead of the ViT model [10]
mainly due to the efficiency gain offered by its local shifted
window (Swin) attention. Similar to the design of the
original Swin Transformer, we keep the local self-attention
mechanism in all the Transformer blocks up except the last
one, where we employ global self-attention. We introduce
global self-attention into SwinIH to alleviate block bound-
ary artifacts produced by the Swin Transformer model when
it is directly trained for image harmonization.
We verify that LEMaRT consistently improves the per-
formance of models with a range of vision Transformer and
CNN architectures compared to training only on the target
dataset, e.g., iHarmony4. When we pre-train our SwinIH
model on MS-COCO dataset with LEMaRT and then fine-
tune it on iHarmony4 [8], it outperforms the state of the
art [16] by 0.4dB while using only 50% of the samples
from iHarmony4 for fine-tuning, and by 1.0dB when using all the samples (see the plot in the bottom half of Figure 1).
The key contributions of our work are summarized below.
•We introduce Label-Efficient Masked Region Transform
(LEMaRT), a novel pre-training method for image harmo-
nization, which is able to leverage large-scale unannotated
image datasets.
•We design SwinIH, an image harmonization model based
on the Swin Transformer architecture [27].
•LEMaRT (SwinIH) establishes new state of the art
on iHarmony4 dataset, while consuming significantly less
amount of training data. LEMaRT also boosts the perfor-
mance of models with various network architectures.
|
Lee_Human_Pose_Estimation_in_Extremely_Low-Light_Conditions_CVPR_2023
|
Abstract
We study human pose estimation in extremely low-light
images. This task is challenging due to the difficulty of
collecting real low-light images with accurate labels, and
severely corrupted inputs that degrade prediction quality
significantly. To address the first issue, we develop a ded-
icated camera system and build a new dataset of real low-
light images with accurate pose labels. Thanks to our cam-
era system, each low-light image in our dataset is coupled
with an aligned well-lit image, which enables accurate pose
labeling and is used as privileged information during train-
ing. We also propose a new model and a new training strat-
egy that fully exploit the privileged information to learn rep-
resentation insensitive to lighting conditions. Our method
demonstrates outstanding performance on real extremely
low-light images, and extensive analyses validate that both
of our model and dataset contribute to the success.
|
1. Introduction
Deep neural networks [6, 55, 56, 64, 66] trained with
large-scale datasets [1, 18, 30, 35, 38] have driven dramatic
advances in human pose estimation recently. However, their
success demands high-quality inputs taken in controlled en-
vironments while in real-world applications images are of-
ten corrupted by low-light conditions, adverse weather con-
ditions, sensor noises, motion blur, etc. Indeed, a precon-
dition for human pose estimation in the wild is robustness
against such adverse conditions.
Motivated by this, we study pose estimation under ex-
∗Equal contribution. †Corresponding authors.
tremely low-light conditions using a single sRGB image, in
which humans can barely see anything. The task is highly
practical as its solution enables nighttime applications of
pose estimation without raw-RGB data or additional de-
vices like IR cameras. It is at the same time challenging
due to the following two reasons. The first is the difficulty
of data collection. Manual annotation of human poses in
low-light images is often troublesome due to their limited
visibility. The second is the difficulty of pose estimation
on low-light images. The poor quality of low-light im-
ages in terms of visibility and signal-to-noise ratio largely
degrades prediction accuracy of common pose estimation
models. A naïve way to mitigate the second issue is to ap-
ply low-light image enhancement [5, 28, 41, 42, 62] to input
images. However, image enhancement is in general highly
expensive in both computation and memory. Also, it is not
aware of downstream recognition tasks and thus could be
sub-optimal for pose estimation in low-light conditions.
To tackle this challenging problem, we first present a
new dataset of real extremely low-light images with ground-
truth pose labels. The key feature of our dataset is that
each low-light image is coupled with a well-lit image of
the same content. The advantage of using the well-lit im-
ages is two-fold. First, they enable accurate labeling for
their low-light counterparts thanks to their substantially bet-
ter visibility. Second, they can be utilized as privileged in-
formation [13, 32, 40, 57, 58], i.e., additional input data that
are more informative than the original ones (low-light im-
ages in our case) but available only in training, to further
improve performance on low-light images. Such benefits
of paired training images have also been validated in other
robust recognition tasks [9, 34, 46–49]. The beauty of our
dataset is that pairs of low-light and well-lit images are all
real and aligned, unlike existing datasets that provide pairs
of synthetic-real images [9, 47, 48] or those of largely mis-
aligned real images [46, 49]. Since it is practically impossi-
ble to capture such paired images using common cameras,
we build a dedicated camera system for data collection.
We also propose an effective method based on learning
using privileged information (LUPI) [58] to fully exploit
our dataset. The proposed method considers a model tak-
ing low-light inputs as a student and a model dealing with
corresponding well-lit images as a teacher . Both of the
teacher and student are trained by a common pose estima-
tion loss, and the student further utilizes knowledge of the
teacher as additional supervision. Specifically, our method
employs neural styles of intermediate feature maps as the
knowledge and forces neural styles of low-light images to
approximate those of well-lit images by an additional loss.
As will be demonstrated, this LUPI approach allows the
learned representation to be insensitive to lighting condi-
tions. Moreover, we design a new network architecture that
unifies the teacher and student through lighting-condition
specific batch normalization (LSBN). LSBN consists of two
batch normalization (BN) layers, each of which serves im-
ages of each lighting condition, i.e., ‘well-lit’ or ‘low-light’.
We replace BNs of an existing network with LSBNs so that
images of different lighting conditions are processed by dif-
ferent BNs. Hence, in our architecture, the teacher and stu-
dent share all the parameters except for those of their corre-
sponding BNs, which allows the student to enjoy the strong
representation learned using well-lit images.
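A minimal stand-in for LSBN as described above: two BN branches with separate statistics, selected by the lighting condition of the batch, while all other network parameters are shared between the teacher and student paths. This is an assumed simplification, not the paper's module:

import torch
import torch.nn as nn

class LSBN(nn.Module):
    """Lighting-condition Specific Batch Normalization (sketch).

    Keeps two BN layers with separate statistics and affine parameters;
    everything else in the network is shared between the well-lit 'teacher'
    path and the low-light 'student' path.
    """
    WELL_LIT, LOW_LIGHT = 0, 1

    def __init__(self, num_features):
        super().__init__()
        self.bns = nn.ModuleList(nn.BatchNorm2d(num_features) for _ in range(2))

    def forward(self, x, condition):
        # condition: LSBN.WELL_LIT or LSBN.LOW_LIGHT for the whole batch.
        return self.bns[condition](x)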
The efficacy of our method is evaluated on real low-light
images we collected for testing. Our method outperforms its
reduced versions and relevant approaches such as lighting-
condition adversarial learning and a combination of image
enhancement and pose estimation. These results clearly
demonstrate the advantages of our dataset and method. In
short, our major contribution is three-fold:
• We propose a novel approach to human pose estimation
in extremely low-light conditions using a single sRGB
image. To the best of our knowledge, we are the first to
tackle this challenging but highly practical problem.
• We build a new dataset that provides real and aligned
low-light and well-lit images with accurate pose labels.
• We present a strong baseline method that fully exploits
the low-light and well-lit image pairs of our dataset.
|
Li_Boosting_Low-Data_Instance_Segmentation_by_Unsupervised_Pre-Training_With_Saliency_Prompt_CVPR_2023
|
Abstract
Inspired by DETR variants, query-based end-to-end in-
stance segmentation (QEIS) methods have recently outper-
formed CNN-based models on large-scale datasets. Yet
they would lose efficacy when only a small amount of
training data is available since it’s hard for the crucial
queries/kernels to learn localization and shape priors. To
this end, this work offers a novel unsupervised pre-training
solution for low-data regimes. Inspired by the recent
success of the Prompting technique, we introduce a new
pre-training method that boosts QEIS models by giving
Saliency Prompt for queries/kernels. Our method contains
three parts: 1) Saliency Masks Proposal is responsible
for generating pseudo masks from unlabeled images based
on the saliency mechanism. 2) Prompt-Kernel Matching
transfers pseudo masks into prompts and injects the corre-
sponding localization and shape priors to the best-matched
kernels. 3) Kernel Supervision is applied to supply super-
vision at the kernel level for robust learning. From a practi-
cal perspective, our pre-training method helps QEIS models
achieve a similar convergence speed and comparable per-
formance with CNN-based models in low-data regimes. Ex-
perimental results show that our method significantly boosts
several QEIS models on three datasets.1
|
1. Introduction
Modern CNN models address the instance segmentation
task in an indirect way, by defining the localization prob-
lem on a large set of proposals [16], window centers [6,10],
or location-based masks [30, 33, 35]. A typical example
is Mask-RCNN [16], which generates candidate bound-
ing boxes using a well-designed region proposal network.
Although this paradigm makes localization learning eas-
*Corresponding author.
1Code: https://github.com/lifuguan/saliency_prompt
[Figure 1 chart: K-Net vs. Mask-RCNN results on COCO 10%, Cityscapes, CTW1500, and COCO-full, comparing Saliency Prompt and ImageNet-supervised initialization]
Figure 1. Performance comparison between K-Net and Mask-
RCNN. K-Net can outperform Mask-RCNN on large-scale
datasets (COCO-full). However, on small datasets (the right
three), it can not perform as well as Mask-RCNN since it’s hard
to learn localization and shape priors. Our proposed unsupervised
pre-training method based on saliency prompt not only boosts the
vanilla K-Net significantly, but also helps to achieve comparable
performance compared with Mask-RCNN.
ily optimized, it still relies on the manually-designed non-
maximum suppression (NMS) as post-processing to remove
duplicated predictions.
Based on a state-of-the-art object detection model,
DETR [27], a few Query-based End-to-end Instance Seg-
mentation (QEIS) models [7, 8, 15, 18, 32, 41] have been
proposed to perform instance segmentation in a new way.
Unlike CNN-based methods which usually require a large
set of proposals, QEIS models use dynamic queries/kernels
to automatically encode object localization knowledge with
different locations and object shapes. This design effec-
tively eliminates hand-crafted anchors and post-processing
like NMS. However, due to the intrinsic dynamic attribute,
the kernels are forced to learn general object spatial dis-
tribution and shape priors in a data-driven manner so that
they can fit any input image. This makes QEIS models re-
quire a much larger amount of training data and a much
longer training time to achieve competitive performance
with CNN-based methods. Once in low-data regimes [1],
QEIS models will encounter much more significant perfor-
mance drops than CNN-based methods, as shown in Fig-
ure 1. Here we take K-Net [41] as the typical example of
QEIS models and compare it with Mask-RCNN.
That being said, the potential of QEIS models is still
enormous since once good localization and shape priors can
be learned, they can perform on par with or even outperform
CNN-based methods with a much more concise pipeline.
This makes us think about how we can help QEIS models
learn localization and shape priors quickly, especially for
low-data regimes.
A promising solution is to adopt unsupervised pre-
training, which requires no extra data and no modifica-
tion to existing model architectures. However, most exist-
ing unsupervised pre-training methods [1, 3, 5, 11, 36] are
only used for the backbone and can not benefit instance seg-
mentation prediction heads, where localization and shape
priors are exactly encoded. In the object detection field,
some works [1, 11] do pre-train a full detection architec-
ture. However, they use pseudo bounding boxes for train-
ing, many of which do not contain any object inside and hence
cannot generate pseudo instance masks for instance seg-
mentation. FreeSOLO [34] is the first method specifically
designed for instance segmentation. Yet it mainly focuses
on generating pseudo masks and directly using them to su-
pervise the model training. Such a way still learns the
object localization and shape priors in a data-driven man-
ner, hence requiring tedious steps to generate high-quality
pseudo masks. To address these problems, we present a
novel unsupervised pre-training method for QEIS models.
Inspired by the recent advances in Prompting in NLP and
vision tasks [12, 19, 28, 43, 44], we propose to directly in-
ject localization and shape priors into the kernels using our
proposed Saliency Prompt (SP) . The prompts are gener-
ated by saliency masks which indicate potential objects, and
then are used to decorate the kernels for injecting location
and shape knowledge.
In detail, our saliency prompt involves two essential
parts: saliency and prompt : First, a Saliency Mask
Proposal generation method is responsible for generating
saliency-level pseudo masks from unlabeled images. In-
stead of directly learning from noisy pseudo masks, we
use them to generate corresponding region features and
then achieve prompts from them. Next, a Prompt-Kernel
Matching module matches the saliency prompts to the ker-
nels and then injects the prior knowledge encoded in the
prompts into the best-matched kernels. Furthermore, we
also propose a Kernel Supervision scheme to supervise the
model learning at the kernel level to gain kernel robustness.
See Figure 2 for overview.
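One hedged way to picture the Prompt-Kernel Matching step is sketched below: prompts pooled from saliency pseudo masks are assigned to their most similar kernels (here via Hungarian matching on cosine similarity) and injected by residual addition; the matching criterion, the injection operator, and the feature sizes are assumptions rather than the paper's exact design.

```python
# Hedged sketch: match saliency prompts to kernels and inject location/shape priors.
import torch
import torch.nn.functional as F
from scipy.optimize import linear_sum_assignment

def inject_saliency_prompts(kernels, prompts, alpha=0.5):
    """kernels: (N, D) instance kernels/queries; prompts: (M, D) region features pooled
    under saliency pseudo masks (M <= N). Returns kernels with priors injected."""
    sim = F.normalize(prompts, dim=1) @ F.normalize(kernels, dim=1).T   # (M, N) cosine sim
    prompt_idx, kernel_idx = linear_sum_assignment(sim.detach().cpu().numpy(), maximize=True)
    prompt_idx = torch.as_tensor(prompt_idx)
    kernel_idx = torch.as_tensor(kernel_idx)
    updated = kernels.clone()
    updated[kernel_idx] = kernels[kernel_idx] + alpha * prompts[prompt_idx]  # residual injection
    return updated

kernels = torch.randn(100, 256)   # e.g., K-Net style instance kernels
prompts = torch.randn(12, 256)    # one prompt per saliency pseudo mask
kernels_with_priors = inject_saliency_prompts(kernels, prompts)
```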
In our experiments, our method surpasses all the existing
unsupervised pre-training algorithms on low-data regimes
on four datasets. It can be used as a plug-and-play pre-
training step for most QEIS methods and enables faster
convergence speed and better performance without any increase in parameters or memory. Most importantly, our
method achieves two desiderata on downstream tasks: (a) it
leads to the same convergence speed as CNN-based meth-
ods; (b) it gains comparable or even better performance
than CNN-based methods on most downstream datasets.
In ablations, we find that our method shows big tolerance
to the quality of pseudo masks. As such, we can easily
achieve performance improvement without a sophisticated
and time-consuming pseudo mask generation method as in
FreeSOLO [34].
Meanwhile, it is important to note that our approach has the
following significant differences from the currently pop-
ular semi-supervised methods [37]: (1) Our model is a
self-supervised method that works in a "pre-training + down-
stream task fine-tuning" fashion, where the domains of the
down-stream tasks are not constrained, which, in most
cases, differ from the pre-training domain. However, the
semi-supervised setting constrains all training data resid-
ing in a single domain. Otherwise, the semi-supervised
model cannot converge based on our experiments. (2) Semi-
supervised methods usually use auxiliary loss (like pseudo-
label supervision), which we do not use. For these two
reasons, semi-supervised works are not directly compa-
rable to ours.
|
Li_LOCATE_Localize_and_Transfer_Object_Parts_for_Weakly_Supervised_Affordance_CVPR_2023
|
Abstract
Humans excel at acquiring knowledge through observa-
tion. For example, we can learn to use new tools by watch-
ing demonstrations. This skill is fundamental for intelligent
systems to interact with the world. A key step to acquire
this skill is to identify what part of the object affords each
action, which is called affordance grounding. In this paper,
we address this problem and propose a framework called
LOCATE that can identify matching object parts across im-
ages, to transfer knowledge from images where an object
is being used (exocentric images used for learning), to im-
ages where the object is inactive (egocentric ones used to
test). To this end, we first find interaction areas and extract
their feature embeddings. Then we learn to aggregate the
embeddings into compact prototypes (human, object part,
and background), and select the one representing the object
part. Finally, we use the selected prototype to guide affor-
dance grounding. We do this in a weakly supervised manner,
learning only from image-level affordance and object la-
bels. Extensive experiments demonstrate that our approach
outperforms state-of-the-art methods by a large margin on
both seen and unseen objects.1
1Project page: https://reagan1311.github.io/locate .
|
1. Introduction
A fundamental skill of humans is learning to interact
with objects just by observing someone else performing
those interactions [5]. For instance, even if we have never
played tennis, we can easily learn where to hold the racket
just by looking at a single or few photographs of those inter-
actions. Such learning capabilities are essential for intelli-
gent agents to understand what actions can be performed on
a given object. Current visual systems often focus primarily
on recognizing what objects are in the scene (passive per-
ception), rather than on how to use objects to achieve cer-
tain functions (active interaction). To this end, a growing
number of studies [3,12,26,47] have begun to utilize affor-
dance [17] as a medium to bridge the gap between passive
perception and active interaction. In computer vision and
robotics [2, 20], affordance typically refers to regions of an
object that are available to perform a specific action, e.g., a
knife handle affords holding, and its blade affords cutting.
In this paper, we focus on the task of affordance ground-
ing, i.e., locating the object regions used for a given action.
Previous methods [10, 12, 14, 37, 39] have often treated af-
fordance grounding as a fully supervised semantic segmen-
tation task, which requires costly pixel-level annotations.
Instead, we follow the more realistic setting [32, 33, 38]
Figure 2. General illustration of LOCATE. We first extract the em-
beddings under the region of interest where the exocentric interac-
tions are happening, and then split these embeddings into several
clusters. In the end, the prototype of the object-part cluster is se-
lected to supervise the egocentric affordance grounding.
where the task is learning object affordances by observing
human-object interaction images. That is, given some in-
teraction images, such as those in Fig. 2, along with the
corresponding label (e.g., “hold”), the aim is to learn af-
fordance grounding on the novel instances of that object.
This is a weakly-supervised problem setting where only the
image-level labels are given without any per-pixel annota-
tions. Concretely, given several third-person human-object
interaction images (exocentric) and one target object image
(egocentric), our goal is to extract affordance knowledge
and cues from exocentric interactions, and perform affor-
dance grounding in the egocentric view by using only affor-
dance labels.
There are several key challenges underlying the problem
of affordance grounding. The first is due to the nature of
the supervision, where only image-level affordance labels
are given, being a weakly supervised problem. Here, the
system needs to automatically reason about affordance re-
gions just from classification labels. Second, human-object
interactions often introduce heavy occlusion of object parts
by interacting humans. In other words, the object part that
the system needs to predict for a particular affordance (e.g.,
a mug handle for the “holding” affordance) in an exocentric
image can often be the part that is occluded (e.g., by hands).
Third, interactions are of great diversity. The way humans
interact with objects varies across individuals resulting in
diverse egocentric interaction images. Lastly, there is a
clear domain gap between exocentric and egocentric images
where the former have clutter, occlusion etc., and the latter
are cleaner (e.g., in Fig. 2). This makes affordance knowl-
edge transfer particularly challenging.
In this work, we propose a framework called LOCATE
that addresses these core challenges by locating the exact object parts involved in the interaction from exocentric im-
ages and transferring this knowledge to inactive egocentric
images. Refer to Fig. 2 for the illustration. Specifically, we
first use the class activation mapping (CAM) [51] technique
to find the regions of human-object-interaction in exocen-
tric images. Despite being trained for the interaction recog-
nition task, we observe that CAM can generate good local-
ization maps for interaction regions. We then segment this
region of interest further into regions corresponding to hu-
man, object part, and background. We do this by extracting
embeddings and performing k-means clustering to obtain
several compact prototypes. Next, we automatically pre-
dict which of these prototypes corresponds to the object part
relevant to the affordance. To this end, we propose a mod-
ule named PartSelect that leverages part-aware features and
attention maps from a self-supervised vision transformer
(DINO-ViT [6]) to obtain the desired prototype. Finally,
we use the object-part prototype as a high-level pseudo su-
pervision to guide egocentric affordance grounding.
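A hedged sketch of the clustering-and-selection step is given below: pixel embeddings inside the CAM-highlighted region are grouped with k-means, and the cluster prototype closest to a part-aware reference feature (a stand-in for the PartSelect cue) is returned; feature dimensions and the selection score are illustrative assumptions.

```python
# Hedged sketch: cluster region embeddings and pick the object-part prototype.
import torch
import torch.nn.functional as F
from sklearn.cluster import KMeans

def object_part_prototype(pixel_feats, cam_mask, part_reference, k=3):
    """pixel_feats: (H*W, D) embeddings; cam_mask: (H*W,) bool interaction region;
    part_reference: (D,) feature expected to be close to the object-part cluster."""
    region = pixel_feats[cam_mask].detach().cpu().numpy()
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(region)
    prototypes = torch.stack([
        torch.from_numpy(region[labels == c].mean(axis=0)) for c in range(k)
    ]).float()                                              # (k, D): e.g., human / part / background
    scores = F.cosine_similarity(prototypes, part_reference.unsqueeze(0).expand_as(prototypes), dim=1)
    return prototypes[scores.argmax()]                      # selected object-part prototype

feats = torch.randn(64 * 64, 384)
mask = torch.rand(64 * 64) > 0.7
prototype = object_part_prototype(feats, mask, part_reference=torch.randn(384))
```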
Our contributions can be summarized as follows. (1)
We propose a framework called LOCATE that extracts
affordance knowledge from weakly supervised exocentric
human-object interactions, and transfers this knowledge to
the egocentric image in a localized manner. (2) We intro-
duce a novel module termed PartSelect to pick affordance-
specific cues from human-object interactions. The extracted
information is then used as explicit supervision to guide af-
fordance grounding on egocentric images. (3) LOCATE
achieves state-of-the-art results with far fewer parameters
and faster inference speed than previous methods, and is
able to locate accurate affordance region for unseen objects.
See Fig. 1 for examples of our results and comparison to
state-of-the-art.
|
Lee_Single_View_Scene_Scale_Estimation_Using_Scale_Field_CVPR_2023
|
Abstract
In this paper, we propose a single image scale estima-
tion method based on a novel scale field representation. A
scale field defines the local pixel-to-metric conversion ratio
along the gravity direction on all the ground pixels. This
representation resolves the ambiguity in camera parame-
ters, allowing us to use a simple yet effective way to collect
scale annotations on arbitrary images from human annota-
tors. By training our model on calibrated panoramic image
data and the in-the-wild human annotated data, our sin-
gle image scene scale estimation network generates robust
scale fields on a variety of images, which can be utilized in
various 3D understanding and scale-aware image editing
applications.
|
1. Introduction
Single image 3D understanding plays a significant role in
various computer vision tasks, such as AR/VR applications,
robotics, and computational 3D photography. Many recent
approaches show promising results in estimating depth [17,18, 25], scene structure [24, 32], or radiance field [20, 22,
23]. However, they often treat scale as an undetermined
factor as it is a very ill-posed problem when the physical
extrinsics of the camera are unknown. Therefore, it remains a
very challenging task to estimate the metric scale of a scene
for a single unconstrained image.
One seminal work in scale estimation is single view
metrology [6]. As detailed in this line of works [6, 10, 35],
knowing horizon line, field of view (FoV) and absolute
camera height enables the conversion between any 2D mea-
surements in image space to 3D measurements. Horizon
line and FoV can be estimated using visual features, as
many of other previous methods suggest [9, 11, 29, 30, 35].
However, since the absolute camera height information can-
not be obtained from low-level visual features, Criminisi et
al. [6], Hoiem et al. [10] and Zhu et al. [35] utilize canoni-
cal object heights as reference. Reprojecting the 2D bound-
ing boxes of well-known objects like humans or cars to 3D
space with their known metric heights, these methods derive
the vertical height of the camera, and then calculate metric
scale of other objects in the 2D image.
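For a level camera (no roll and small tilt), the conversion these works rely on reduces to a simple ratio involving the horizon line; the sketch below shows this standard single-view-metrology approximation with hypothetical pixel coordinates, and is only a simplified illustration of the cited formulations.

```python
# Single-view-metrology approximation for a level camera; image y grows downward and
# the object stands on the ground plane:  h / H_cam = (y_bottom - y_top) / (y_bottom - y_horizon)
def metric_height(y_top, y_bottom, y_horizon, cam_height_m):
    """Metric height of an object from its image extent and the horizon line."""
    return cam_height_m * (y_bottom - y_top) / (y_bottom - y_horizon)

def camera_height_from_reference(y_top, y_bottom, y_horizon, known_height_m):
    """Invert the same relation to recover camera height from an object of known height."""
    return known_height_m * (y_bottom - y_horizon) / (y_bottom - y_top)

# Example with hypothetical coordinates: a ~1.7 m person whose feet/head project to
# y=800/y=500 with the horizon at y=450 implies a camera height of about
# 1.7 * (800 - 450) / (800 - 500) = 1.98 m.
cam_h = camera_height_from_reference(500, 800, 450, 1.7)
obj_h = metric_height(300, 760, 450, cam_h)   # height of another object in the scene
print(round(cam_h, 2), round(obj_h, 2))
```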
However, there are two major downsides of single view
metrology based approaches. First, their scale estimation
relies on the existence of known objects with either precise
or canonical metric heights. This restricts practical usage
of their methods because there may or may not be an object
with known height. In addition, the height of a known ob-
ject can also change with its pose (e.g., a sitting person vs. a
standing person), making the system less reliable. Second,
their prediction of the scene scale is highly dependent on a
few global parameters like horizon line, FoV , and camera
height. Among these parameters, camera height is very dif-
ficult to directly predict based on visual features. Moreover,
models that directly predict these parameters tend to overfit
to a certain dataset where the camera height and the FoV lie
in a limited range.
In this paper, to tackle the aforementioned issues, we in-
troduce a novel way of representing scene scale. We define
a pixel-wise pixel-to-metric 2D map, called Scale Field .
Using this local and dense representation, our goal is to
train a robust and generalizable scale estimation network
to recover the scene scale. Also, we provide an efficient
pipeline to annotate scale fields on random images based
on our geometrical observations consistent with single view
metrology. This allows us to collect a diverse set of images
with varying camera angles and camera heights. Our single-
view scene scale estimation network trained on this dataset
shows robust results on a variety of scenes, and it general-
izes well even on object-centric images. With our predicted
scale field, we can perform various 3D scene understanding
and scale-aware image editing, as visualized in Fig. 1.
To summarize, our major contributions are:
• We introduce Scale Field , a novel representation of
scene scale information, which can be utilized in var-
ious 3D understanding and scale-aware image editing
tasks.
• We propose a pipeline to annotate scale fields on web
images, and collect a diverse set of training samples.
• We provide a formulation of the single image scene
scale estimation network , trained on a mixture of
training data from panoramic images and human an-
notated data. Our method shows great robustness and
generalizability on in-the-wild images.
|
Li_ImageNet-E_Benchmarking_Neural_Network_Robustness_via_Attribute_Editing_CVPR_2023
|
Abstract
Recent studies have shown that higher accuracy on Im-
ageNet usually leads to better robustness against differ-
ent corruptions. Therefore, in this paper, instead of fol-
lowing the traditional research paradigm that investigates
new out-of-distribution corruptions or perturbations deep
models may encounter, we conduct model debugging in in-
distribution data to explore which object attributes a model
may be sensitive to. To achieve this goal, we create a toolkit
for object editing with controls of backgrounds, sizes, po-
sitions, and directions, and create a rigorous benchmark
named ImageNet-E(diting) for evaluating the image clas-
sifier robustness in terms of object attributes. With our
ImageNet-E, we evaluate the performance of current deep
learning models, including both convolutional neural net-
works and vision transformers. We find that most models
are quite sensitive to attribute changes. A small change
in the background can lead to an average of 9.23% drop
*Corresponding author.
This research is supported in part by the National Key Research and
Development Program of China under Grant No. 2020AAA0140000.
on top-1 accuracy. We also evaluate some robust models
including both adversarially trained models and other ro-
bust trained models and find that some models show worse
robustness against attribute changes than vanilla models.
Based on these findings, we discover ways to enhance at-
tribute robustness with preprocessing, architecture designs,
and training strategies. We hope this work can provide
some insights to the community and open up a new av-
enue for research in robust computer vision. The code
and dataset are available at https://github.com/
alibaba/easyrobust .
|
1. Introduction
Deep learning has triggered the rise of artificial intel-
ligence and has become the workhorse of machine intel-
ligence. Deep models have been widely applied in vari-
ous fields such as autonomous driving [ 28], medical sci-
ence [ 33], and finance [ 38]. With the spread of these tech-
niques, the robustness and safety issues begin to be essen-
tial, especially after the finding that deep models can be
easily fooled by negligible noises [ 16]. As a result, more
researchers contribute to building datasets for benchmark-
ing model robustness to spot vulnerabilities in advance.
Most of the existing work builds datasets for evaluat-
ing the model robustness and generalization ability on out-
of-distribution data [ 7,22,30] using adversarial examples
and common corruptions. For example, the ImageNet-
C(orruption) dataset conducts visual corruptions such as
Gaussian noise to input images to simulate the possible pro-
cessors in real scenarios [ 22]. ImageNet-R(enditions) con-
tains various renditions ( e.g., paintings, embroidery) of Im-
ageNet object classes [ 21]. As both studies have found that
higher accuracy on ImageNet usually leads to better robust-
ness against different domains [ 22,50]. However, most pre-
vious studies try to achieve this in a top-down way, such
as architecture design, exploring a better training strategy,
etc. We advocate that it is also essential to manage it in a
bottom-up way, that is, conducting model debugging with
the in-distribution dataset to provide clues for model repair-
ing and accuracy improvement. For example, it is interest-
ing to explore whether a bird with a water background can
be recognized correctly even if most birds appear with trees
or grasses in the training data. Though this topic has been
investigated in studies such as cause-and-effect analysis [9],
the experiments and analysis are undertaken on domain gen-
eralization datasets. How a deep model generalizes to dif-
ferent backgrounds is still unknown due to the vacancy of a
qualified benchmark. Therefore, in this paper, we provide
a detached object editing tool to conduct the model debug-
ging from the perspective of object attribute and construct a
dataset named ImageNet-E(diting).
The ImageNet-E dataset is a compact but challenging
test set for object recognition that contains controllable ob-
ject attributes including backgrounds, sizes, positions and
directions, as shown in Fig. 1. This is in contrast to ObjectNet [5],
whose images are collected by workers posing ob-
jects according to specific instructions and thus differ from the
target data distribution, which makes it hard to tell whether
the degradation comes from the changes of attribute or dis-
tribution. Our ImageNet-E is automatically generated with
our object attribute editing tool based on the original Im-
ageNet. Specifically, to change the object background, we
provide an object background editing method that can make
the background simpler or more complex based on diffusion
models [ 25,46]. In this way, one can easily evaluate how
much the background complexity can influence the model
performance. To control the object size, position, and di-
rection to simulate pictures taken from different distances
and angles, an object editing method is also provided. With
the editing toolkit, we apply it to the large-scale ImageNet
dataset [ 42] to construct our ImageNet-E(diting) dataset. It
can serve as a general dataset for benchmarking robustness
evaluation on different object attributes.
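As an illustration of the kind of evaluation such a benchmark enables, the sketch below compares a classifier's top-1 accuracy on the original images against each attribute-edited variant and reports the drop; the variant names and loader setup are assumptions, not the released ImageNet-E layout.

```python
# Hedged sketch: measure the top-1 accuracy drop per attribute edit.
import torch

@torch.no_grad()
def top1_accuracy(model, loader, device="cuda"):
    correct = total = 0
    for images, labels in loader:
        preds = model(images.to(device)).argmax(dim=1).cpu()
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return correct / total

def attribute_robustness_report(model, loaders):
    """loaders: dict mapping a variant name (e.g., 'original', 'background_complex',
    'size_small', 'position_shift') to a DataLoader over the same label set."""
    base = top1_accuracy(model, loaders["original"])
    return {name: base - top1_accuracy(model, dl)   # accuracy drop per attribute edit
            for name, dl in loaders.items() if name != "original"}
```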
With the ImageNet-E dataset, we evaluate the perfor-
mance of current deep learning models, including both convolutional neural networks (CNNs), vision transformers as
well as the large-scale pretrained CLIP [ 40]. We find that
deep models are quite sensitive to object attributes. For ex-
ample, when editing the background towards high complex-
ity (see Fig. 1, the 3rd row in the background part), the drop
in top-1 accuracy reaches 9.23% on average. We also find
that though some robust models share similar top-1 accu-
racy on ImageNet, the robustness against different attributes
may differ a lot. Meanwhile, some models, being robust un-
der certain settings, even show worse results than the vanilla
ones on our dataset. This suggests that improving robust-
ness is still a challenging problem and the object attributes
should be taken into account. Afterward, we discover ways
to enhance robustness against object attribute changes. The
main contributions are summarized as follows:
•We provide an object editing toolkit that can change
the object attributes for manipulated image generation.
•We provide a new dataset called ImageNet-E that can
be used for benchmarking robustness to different ob-
ject attributes. It opens up new avenues for research in
robust computer vision against object attributes.
•We conduct extensive experiments on ImageNet-E and
find that models that have good robustness on adversar-
ial examples and common corruptions may show poor
performance on our dataset.
|
Lin_Collaborative_Static_and_Dynamic_Vision-Language_Streams_for_Spatio-Temporal_Video_Grounding_CVPR_2023
|
Abstract
Spatio-Temporal Video Grounding (STVG) aims to lo-
calize the target object spatially and temporally accord-
ing to the given language query. It is a challenging task
in which the model should properly understand dynamic visual
cues (e.g., motions) and static visual cues (e.g., object ap-
pearances) in the language description, which requires ef-
fective joint modeling of spatio-temporal visual-linguistic
dependencies. In this work, we propose a novel frame-
work in which a static vision-language stream and a dy-
namic vision-language stream are developed to collabora-
tively reason the target tube. The static stream performs
cross-modal understanding in a single frame and learns
to attend to the target object spatially according to intra-
frame visual cues like object appearances. The dynamic
stream models visual-linguistic dependencies across mul-
tiple consecutive frames to capture dynamic cues like mo-
tions. We further design a novel cross-stream collabora-
tive block between the two streams, which enables the static
and dynamic streams to transfer useful and complementary
information from each other to achieve collaborative rea-
soning. Experimental results show the effectiveness of the
collaboration of the two streams and our overall frame-
work achieves new state-of-the-art performance on both
HCSTVG and VidSTG datasets.
|
1. Introduction
Vision-language cross-modal understanding is a chal-
lenging yet important research problem that bridges the
communication between humans and artificial intelligence
systems. It has attracted increasing attention and many
vision-language tasks were studied in recent years, like vi-
*Corresponding author.
sual grounding [13, 32], VQA [7, 31], image/video cap-
tioning [15, 40], etc. In this work, we focus on a chal-
lenging vision-language task named Spatio-Temporal Video
Grounding (STVG) which was recently proposed in [42].
Given a language query indicating an object (as shown in
Figure 1), STVG aims to localize the target object spatially
and temporally in the video. In this task, the input lan-
guage query may express different kinds of visual concepts,
thus the model needs to properly capture and understand these
concepts in both vision and language modalities.
In STVG task, dynamic visual concepts like human mo-
tion and static visual concepts like object appearance are
both important for distinguishing the target object from
other objects that occurred in the same video. For exam-
ple, in the first sample in Figure 1, the two men are dressed
alike (i.e., their static appearance cues are similar), and we
can only distinguish them by motion. In the second sample,
the two women perform similar actions, they both stand up
(i.e., they have similar dynamic motion cues), here, we can
only distinguish them by their clothes. The above examples
show that static or dynamic visual cues alone cannot solve
the STVG task well. And it implies that modeling static and
dynamic visual-linguistic dependencies and collaboratively
utilizing them are important for addressing STVG task.
Humans treat static and dynamic cues differently [4,21],
but this was overlooked in previous STVG works [22, 24,
30]. Taking the first query in Figure 1 as an example, a hu-
man would randomly or evenly click on some locations on
the video’s progress bar to find candidate frames containing
a man in a blue coat. Then he will play the video around that
frame and attend to the candidate man to check whether he
performs the action described in the text (i.e., “turns around
and stops by the stone”). In the above process, the human
understands static and dynamic cues in different ways (i.e.,
view a single frame and watch the video clip, respectively)
and determines the target object by jointly considering the
Query: The man in the blue coat turns around and stops by the stone.
Query: The woman in light gray stands up and turns.
Figure 1. Two examples for Spatio-Temporal Video Grounding task. In the first case, the two men are both dressed alike in blue, thus
understanding the action described in the query sentence is essential to recognize the person of interest. In the second case, both of the two
women stand up, we can only distinguish them by their clothes. (Best viewed zoomed in on screen.)
static and dynamic cues in an “attend-and-check” process.
Inspired by the above observations, we propose a frame-
work that consists of a static and a dynamic vision-language
(VL) streams to model static and dynamic cues, respec-
tively. And we design a novel cross-stream collaboration
block between the two streams to simulate the “attend-and-
check” process, which exchanges useful and complemen-
tary information learned in each stream and enables the two
streams to collaboratively reason the target object.
Specifically, in this work, our static vision-language
(VL) stream learns to attend to some candidate regions
according to static visual cues like appearance, while the
dynamic VL stream learns to understand the dynamic vi-
sual cues like action described in the text query. Then,
in the collaboration block, we guide the dynamic stream
to only focus and attend on the motion of the candidate
objects by using the learned attended region in the static
stream. And we transfer the text-motion matching infor-
mation learned in the dynamic stream to the static stream,
to help it further check and determine the target object and
predict a more consistent tube. With the above cross-stream
collaboration blocks, both the static and dynamic vision-
language streams can learn reciprocal information from the
other stream, which is effective for achieving more accurate
spatio-temporal grounding predictions. We conduct experi-
ments on HCSTVG [24] and VidSTG [42] datasets and our
approach outperforms previous approaches by a consider-
able margin. Ablation studies demonstrate the effectiveness
of each component in our proposed cross-stream collabo-
ration block and show its superiority over commonly used
counterparts in video understanding works [4, 21].
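A hedged sketch of one possible cross-stream collaboration block is shown below: the static stream's spatial attention gates where the dynamic stream looks, and temporally pooled motion evidence is projected back to refine the static features; the dimensions and fusion operators are illustrative assumptions rather than the paper's architecture.

```python
# Hedged sketch of a cross-stream collaboration block (attend-and-check style exchange).
import torch
import torch.nn as nn

class CrossStreamCollaboration(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        self.to_static = nn.Linear(dim, dim)   # dynamic -> static information transfer

    def forward(self, static_feat, dynamic_feat, static_attn):
        """static_feat: (B, HW, D) per-frame features; dynamic_feat: (B, T, HW, D) clip
        features; static_attn: (B, HW) spatial attention over candidate regions."""
        gate = static_attn.unsqueeze(1).unsqueeze(-1)        # (B, 1, HW, 1)
        focused_dynamic = dynamic_feat * gate                # attend only to candidate regions
        motion_evidence = focused_dynamic.mean(dim=1)        # (B, HW, D) temporal pooling
        refined_static = static_feat + self.to_static(motion_evidence)
        return refined_static, focused_dynamic

block = CrossStreamCollaboration()
s = torch.randn(2, 64 * 64, 256)
d = torch.randn(2, 8, 64 * 64, 256)
attn = torch.softmax(torch.randn(2, 64 * 64), dim=1)
refined, focused = block(s, d, attn)
```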
In summary, our contributions are: 1), we develop
an effective framework that contains two parallel streams
to model static-dynamic visual-linguistic dependencies for
complete cross-modal understanding; 2), we propose a
novel cross-stream collaboration block between the two
streams to exchange reciprocal information for each other and enable collaborative reasoning of the target object;
3), Our overall framework achieves state-of-the-art perfor-
mances on HCSTVG [24] and VidSTG datasets [42].
|
Liang_CrowdCLIP_Unsupervised_Crowd_Counting_via_Vision-Language_Model_CVPR_2023
|
Abstract
Supervised crowd counting relies heavily on costly man-
ual labeling, which is difficult and expensive, especially
in dense scenes. To alleviate the problem, we propose a
novel unsupervised framework for crowd counting, named
CrowdCLIP . The core idea is built on two observations:
1) the recent contrastive pre-trained vision-language model
(CLIP) has presented impressive performance on various
downstream tasks; 2) there is a natural mapping between
crowd patches and count text. To the best of our knowl-
edge, CrowdCLIP is the first to investigate the vision-
language knowledge to solve the counting problem. Specif-
ically, in the training stage, we exploit the multi-modal
ranking loss by constructing ranking text prompts to match
the size-sorted crowd patches to guide the image encoder
learning. In the testing stage, to deal with the diversity
of image patches, we propose a simple yet effective pro-
gressive filtering strategy to first select the highly poten-
tial crowd patches and then map them into the language
space with various counting intervals. Extensive exper-
iments on five challenging datasets demonstrate that the
proposed CrowdCLIP achieves superior performance com-
pared to previous unsupervised state-of-the-art counting
methods. Notably, CrowdCLIP even surpasses some pop-
ular fully-supervised methods under the cross-dataset set-
ting. The source code will be available at https://
github.com/dk-liang/CrowdCLIP .
|
1. Introduction
Crowd counting aims to estimate the number of peo-
ple from images or videos in various crowd scenes, which
has received tremendous attention due to its wide applica-
tions in public safety and urban management [14, 42]. It is
very challenging to accurately reason the count, especially
*Equal contribution.†Corresponding author.
Work done when Dingkang Liang was an intern at Baidu.
Figure 1. (a) The supervised methods require point-level annota-
tions, which need heavy manual labor to label a large-scale dataset.
The proposed method transfers the vision-language knowledge to
perform unsupervised crowd counting without any annotation; (b)
Crowd counting aims to calculate the number of human heads,
while some crowd patches do not contain human heads, i.e., am-
biguous patches.
in dense regions where the crowd gathers.
The recent crowd counting methods [5, 18, 41, 62] at-
tempt to regress a density map (Fig. 1(a)). To train such
density-based models, point-level annotations are required,
i.e., assigning a point in each human head. However, an-
notating point-level object annotations is an expensive and
laborious process. For example, the NWPU-Crowd [48]
dataset, containing 5,109 images, needs 30 annotators and
3,000 human hours for the entire annotation process. To
reduce the annotation cost, some weakly-supervised meth-
ods [16, 19, 59] and semi-supervised methods [22, 28] are
proposed, where the former usually adopts the count-level
annotation as supervision, and the latter uses a small frac-
tion of fully-labeled images and massive unlabeled images
for training. However, both weakly and semi-supervised
methods still need considerable label costs, especially when
annotating dense or blurry images.
Considering the above issues, a crowd counting model
that can be trained without any labeled data is worth ex-
ploring. So far, there is only one approach called CSS-
CCNN [3] for pure unsupervised crowd counting. Based
on the idea that natural crowds follow a power law dis-
tribution, CSS-CCNN ensures the distribution of predic-
tions is matched to the prior. Though the performance
of CSS-CCNN is better than the random paradigm, there
is a significant performance gap compared to the popu-
lar fully supervised methods [4, 62]. Recently, Contrastive
Language-Image Pre-Training (CLIP) as a new paradigm
has drawn increasing attention due to its powerful transfer
ability. By using large-scale noisy image-text pairs to learn
visual representation, CLIP has achieved promising perfor-
mance on various downstream vision tasks ( e.g., object de-
tection [38], semantic segmentation [56], generation [10]).
However, how to apply such a language-driven model to
crowd counting has not been explored. Obviously, CLIP
cannot be directly applied to the counting task since there
is no such count supervision during the contrastive pre-
training of CLIP.
A natural way to exploit the vision-language knowledge
is to discretize the crowd number into a set of intervals,
which transfers the crowd counting to a classification in-
stead of a regression task. Then one can directly calcu-
late the similarity between the image embedding from the
image encoder and the text embedding from the text en-
coder and choose the most similar image-text pair as the
prediction count (called zero-shot CLIP). However, we re-
veal that the zero-shot CLIP reports unsatisfactory perfor-
mance, attributed to two crucial reasons: 1) The zero-shot
CLIP cannot properly understand crowd semantics since the
original CLIP is mainly trained to recognize single-object
images [38]; 2) Due to the non-uniform distribution of the
crowd, the image patches are of high diversity while count-
ing aims to calculate the number of human heads within
each patch. Some crowd patches that do not contain human
heads may cause ambiguity to CLIP, as shown in Fig. 1(b).
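This zero-shot baseline can be written in a few lines with the public CLIP model, as sketched below; the count intervals and the backbone choice are illustrative assumptions (the prompt template echoes the one in Fig. 1), and this is the naive baseline being analyzed, not CrowdCLIP itself.

```python
# Zero-shot CLIP counting baseline: discretize counts into intervals, verbalize each
# interval as a prompt, and pick the interval whose text embedding best matches the patch.
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

count_intervals = [(0, 10), (10, 50), (50, 100), (100, 200), (200, 400)]  # assumed intervals
prompts = [f"The photo contains {lo} to {hi} people" for lo, hi in count_intervals]
text_tokens = clip.tokenize(prompts).to(device)

@torch.no_grad()
def zero_shot_count(patch):
    image = preprocess(patch).unsqueeze(0).to(device)
    img_feat = model.encode_image(image)
    txt_feat = model.encode_text(text_tokens)
    img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
    txt_feat = txt_feat / txt_feat.norm(dim=-1, keepdim=True)
    sim = (img_feat @ txt_feat.T).squeeze(0)        # cosine similarity per interval
    best = int(sim.argmax())
    lo, hi = count_intervals[best]
    return (lo + hi) / 2, prompts[best]             # interval midpoint as the patch count

# estimated, matched_prompt = zero_shot_count(Image.open("crowd_patch.jpg"))
```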
To relieve the above problems, in this paper, we propose
CrowdCLIP, which adapts CLIP’s strong vision-category
correspondence capability to crowd counting in an unsu-
pervised manner, as shown in Fig. 1(a). Specifically, first,
we construct ranking text prompts to describe a set of size-
sorted image patches during the training phase. As a re-
sult, the image encoder can be fine-tuned to better capture
the crowd semantics through the multi-modal ranking loss.
Second, during the testing phase, we propose a simple yet
effective progressive filtering strategy consisting of three stages to choose highly related crowd patches. In particular,
the first two stages aim to choose the highly related crowd
patches with a coarse-to-fine classification paradigm, and
the last stage is utilized to map the corresponding crowd
patches into an appropriate count. Thanks to such a progres-
sive inference strategy, we can effectively reduce the impact
of ambiguous crowd patches.
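One hedged reading of such a ranking objective is sketched below: nested, size-sorted patches should receive non-decreasing soft counts under image-text similarity; the temperature, margin, and interval midpoints are assumptions, and the exact loss used in CrowdCLIP may differ.

```python
# Hedged sketch of a multi-modal ranking loss over size-sorted (nested) crowd patches.
import torch
import torch.nn.functional as F

def ranking_loss(patch_feats, text_feats, interval_midpoints, tau=0.07, margin=0.0):
    """patch_feats: (N, D) features of patches sorted from smallest to largest (nested).
    text_feats: (K, D) features of the K ordered count-interval prompts.
    interval_midpoints: (K,) tensor of representative counts per interval."""
    patch_feats = F.normalize(patch_feats, dim=-1)
    text_feats = F.normalize(text_feats, dim=-1)
    sim = patch_feats @ text_feats.T / tau                            # (N, K)
    counts = (sim.softmax(dim=-1) * interval_midpoints).sum(dim=-1)   # soft count per patch
    # Larger (enclosing) patches must not have smaller soft counts than the patches they contain.
    diffs = counts[:-1] - counts[1:]                                  # should be <= 0
    return F.relu(diffs + margin).mean()
```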
Extensive experiments conducted on five challenging
datasets in various data settings demonstrate the effective-
ness of our method. In particular, our CrowdCLIP signifi-
cantly outperforms the current unsupervised state-of-the-art
method CSS-CCNN [3] by 35.2%on the challenging UCF-
QNRF dataset in terms of the MAE metric. Under cross-
dataset validation, our method even surpasses some popular
fully-supervised works [40, 62].
Our major contributions can be summarized as follows:
1) In this paper, we propose a novel unsupervised crowd
counting method named CrowdCLIP, which innovatively
views crowd counting as an image-text matching problem.
To the best of our knowledge, this is the first work to trans-
fer vision-language knowledge to crowd counting. 2) We
introduce a ranking-based contrastive fine-tuning strategy
to make the image encoder better mine potential crowd se-
mantics. In addition, a progressive filtering strategy is pro-
posed to choose the highly related crowd patches for mapping
to an appropriate count interval during the testing phase.
|
Kim_Event-Based_Video_Frame_Interpolation_With_Cross-Modal_Asymmetric_Bidirectional_Motion_Fields_CVPR_2023
|
Abstract
Video Frame Interpolation (VFI) aims to generate in-
termediate video frames between consecutive input frames.
Since the event cameras are bio-inspired sensors that only
encode brightness changes with a micro-second temporal
resolution, several works utilized the event camera to en-
hance the performance of VFI. However, existing methods
estimate bidirectional inter-frame motion fields with only
events or approximations, which cannot account for the com-
plex motion in real-world scenarios. In this paper, we
propose a novel event-based VFI framework with cross-
modal asymmetric bidirectional motion field estimation. In
detail, our EIF-BiOFNet utilizes each valuable charac-
teristic of the events and images for direct estimation of
inter-frame motion fields without any approximation meth-
ods. Moreover, we develop an interactive attention-based
frame synthesis network to efficiently leverage the comple-
mentary warping-based and synthesis-based features. Fi-
nally, we build a large-scale event-based VFI dataset, ERF-
X170FPS, with a high frame rate, extreme motion, and
dynamic textures to overcome the limitations of previous
event-based VFI datasets. Extensive experimental results
validate that our method shows significant performance im-
provement over the state-of-the-art VFI methods on vari-
ous datasets. Our project pages are available at: https:
//github.com/intelpro/CBMNet
|
1. Introduction
Video frame interpolation (VFI) is a long-standing prob-
lem in computer vision that aims to increase the temporal
resolution of videos. It is widely applied to various fields
ranging from SLAM, object tracking, novel view synthe-
sis, frame rate up-conversion, and video enhancement. Due
to its practical usage, many researchers are engaged in en-
hancing the performance of video frame interpolation. Re-
cently, numerous deep learning-based VFI methods have
been proposed and recorded remarkable performance im-
provements in various VFI datasets. Specifically, numerous
Figure 1. Qualitative comparison on the warped frame of inter-
frame motion fields. (b) and (c) estimate symmetrical inter-frame
motion fields. (d) and (e) estimate asymmetric motion fields using
only images and events, respectively. (f) Ours shows the best re-
sults using cross-modal asymmetric bidirectional motion fields.
motion-based VFI methods [3, 4, 8, 12, 22, 29, 30] are pro-
posed thanks to the recent advance in motion estimation al-
gorithms [13, 14, 16, 23, 39, 41]. For the inter-frame motion
field estimation, the previous works [3, 12, 29] estimate the
optical flows between consecutive frames and approximate
intermediate motion fields [12, 29, 49] using linear [12, 29]
or quadratic [49] approximation assumptions. These meth-
ods often estimate the inaccurate inter-frame motion fields
when the motions between frames are vast or non-linear,
adversely affecting the VFI performance.
Event cameras, novel bio-inspired sensors, can cap-
ture blind motions between frames with high temporal res-
olutions since they asynchronously report the brightness
changes of the pixels at the micro-second level. There-
fore, recent event-based VFI methods [9, 19, 42, 43, 47, 51]
have tried to leverage the advantages of the events to esti-
mate the inter-frame motion field. However, they utilized
only the stream of events to estimate motion fields [9,43] or
used approximation methods [42, 47, 51], resulting in sub-
optimal motion field estimation. The first reason is that the
nature of events is sparse and noisy, and all the brightness
changes are not recorded in the stream, leading to inaccu-
rate results. Second, the approximation-based methods can
not completely express the motion model of complex real-
world motion fields. Lastly, the previous works do not fully
take advantage of the image modality, which provides dense
visual information; they overlook the considerable
distinction between the two modalities, for example by
simply concatenating the two modality features [42].
To solve the problem, we propose a novel EIF-
BiOFNet (Event-Image Fusion Bidirectional Optical Flow
Network) for directly estimating asymmetrical inter-frame
motion fields, elaborately considering the characteristics
of the events and frames without the approximation meth-
ods. Our EIF-BiOFNet leverages the following properties
of events and images in the inter-frame motion field estima-
tion. Since the events are triggered near the object edges and
provide abundant motion trajectory information, the event-
level motion fields tend to be accurately estimated near the
motion boundaries. However, they cannot give precise results
in other areas where the events are not triggered. On the
other hand, the image-based motion fields can utilize dense
visual information, but they can not employ the trajectory
of motion, unlike the events. The proposed EIF-BiOFNet
elaborately fuses the image-level and event-level mo-
tion fields to estimate more accurate results. As a result, the
EIF-BiOFNet significantly outperforms previous methods,
as shown in Fig. 1.
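As a toy illustration of this complementarity (not the EIF-BiOFNet architecture), the sketch below fuses an event-based and an image-based flow with a learned per-pixel mask driven by event density; the network widths and inputs are assumptions.

```python
# Toy per-pixel fusion of an event-based flow and an image-based flow.
import torch
import torch.nn as nn

class FlowFusion(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        # Inputs: two candidate flows (2 channels each) plus an event-density map (1 channel).
        self.mask_net = nn.Sequential(
            nn.Conv2d(2 + 2 + 1, hidden, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, flow_event, flow_image, event_density):
        w = self.mask_net(torch.cat([flow_event, flow_image, event_density], dim=1))
        # w -> 1 trusts the event-based flow (near motion boundaries where events fire),
        # w -> 0 trusts the image-based flow (textured areas with dense visual information).
        return w * flow_event + (1 - w) * flow_image

fusion = FlowFusion()
f_ev, f_im = torch.randn(1, 2, 64, 64), torch.randn(1, 2, 64, 64)
density = torch.rand(1, 1, 64, 64)
fused = fusion(f_ev, f_im, density)   # (1, 2, 64, 64)
```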
For the intermediate frame synthesis, recent event-based
VFI methods [9, 42, 43, 47] are built on CNNs. However,
most CNN-based frame synthesis methods have a weak-
ness in long-range pixel correlation due to the limited re-
ceptive field size. To this end, we propose a new Inter-
active Attention-based frame synthesis network to leverage
the complementary warping - and synthesis -based features.
Lastly, we propose a novel large-scale ERF-X170FPS
dataset with an elaborated beam-splitter setup for event-
based VFI research community. Existing event-based VFI
datasets have several limitations, such as train splits that are
not publicly available, low frame rates, and static camera move-
ments. Our dataset contains a higher frame rate (170 fps),
higher resolution (1440×975), and more diverse scenes
compared to event-based VFI datasets [42] where both train
and test splits are publicly available.
In summary, our contributions are four-fold: (I) We pro-
pose a novel EIF-BiOFNet for estimating the asymmetric
inter-frame motion fields by elaborately utilizing the cross-
modality information. (II) We propose a new Interactive
Attention-based frame synthesis network that efficiently
leverages the complementary warping- and synthesis-based
features. (III) We propose a novel large-scale event-based
VFI dataset named ERF-X170FPS . (IV) With these whole
setups, we experimentally validate that our method signif-
icantly outperforms the SoTA VFI methods on five bench-
mark datasets, including the ERF-X170FPS dataset. Specifically, we recorded an unprecedented performance improve-
ment compared to the SoTA event-based VFI method [43] by
7.9 dB (PSNR) on the proposed dataset.
|
Liu_Promoting_Semantic_Connectivity_Dual_Nearest_Neighbors_Contrastive_Learning_for_Unsupervised_CVPR_2023
|
Abstract
Domain Generalization (DG) has achieved great success
in generalizing knowledge from source domains to unseen
target domains. However, current DG methods rely heav-
ily on labeled source data, which are usually costly and
unavailable. Since unlabeled data are far more accessible,
we study a more practical unsupervised domain generaliza-
tion (UDG) problem. Learning invariant visual representa-
tion from different views, i.e., contrastive learning, promises
good semantic features for in-domain unsupervised learning.
However, it fails in cross-domain scenarios. In this paper,
we first delve into the failure of vanilla contrastive learn-
ing and point out that semantic connectivity is the key to
UDG. Specifically, suppressing the intra-domain connectiv-
ity and encouraging the intra-class connectivity help to learn
the domain-invariant semantic information. Then, we pro-
pose a novel unsupervised domain generalization approach,
namely Dual Nearest Neighbors contrastive learning with
strong Augmentation (DN2A). Our DN2A leverages strong
augmentations to suppress the intra-domain connectivity
and proposes a novel dual nearest neighbors search strategy
to find trustworthy cross domain neighbors along with in-
domain neighbors to encourage the intra-class connectivity.
Experimental results demonstrate that our DN2A outper-
forms the state-of-the-art by a large margin, e.g., 12.01%
and 13.11% accuracy gain with only 1% labels for linear
evaluation on PACS and DomainNet, respectively.
|
1. Introduction
Deep learning methods have yielded prolific results in
various tasks in recent years. However, they are tailored for
experimental cases, where training and testing data share the
same distribution. When transferred to practical applications,
these methods perform poorly on out-of-distribution data
due to domain shifts [27, 30]. To tackle this issue, domain
generalization (DG) methods [25, 35] are proposed to learn
*Corresponding author: Wenrui Dai.†Equal contribution.
Figure 1. (a) t-SNE visualization of unsupervised features learned
by SimCLR and our DN2A on PACS. (b) Grad-cam visualization
of linear probing for SimCLR and ours with 10% labeled data.
transferable knowledge from multiple source domains to
generalize on unseen target domains. Despite the promising
results of DG, they are restricted to supervised training with
large amounts of labeled source data. However, large-scale
labeled source data are often unavailable due to the laborious
and expensive annotation capture, while unlabeled data are
far more accessible. Thus, we study the more practical
unsupervised domain generalization (UDG) [34] problem to
learn domain-invariant features in an unsupervised fashion.
Recent advances in unsupervised learning prefer con-
trastive learning (CL) [2, 14, 28, 32], which learns semantic
representation by enforcing similarity over different augmen-
tations of the same image. However, most CL methods are
designed for i.i.d. datasets and can hardly accommodate the
cross-domain scenario in UDG. As depicted in Fig. 1, vanilla
SimCLR fails to learn domain-invariant semantic features
but learns domain-biased features. For further understand-
ing, we dive into this phenomenon and propose the semantic
connectivity for UDG to measure the intra-domain and intra-
class similarity. From augmentation graph view [12, 31],
semantic connectivity is the support overlap of augmented
samples within the same semantic class. We further find
that the degraded semantic connectivity is responsible for
the failure of vanilla CL in UDG, which is reflected in two
aspects, i.e., large intra-domain connectivity and small intra-
class connectivity. Positive samples generated by standard
augmentations under the i.i.d. hypothesis share too much
domain-relevant information, which induces the model to
learn domain-related features for alignment, resulting in
large intra-domain connectivity. Moreover, it is hard to cap-
ture domain invariance via handcrafted transformations due
to significant distribution shifts across domains. For example,
one can hardly transform a cat from sketch to photo. Small
intra-class connectivity occurs in cross-domain scenarios.
To address these issues, we propose to suppress the intra-
domain connectivity and enhance intra-class connectivity.
First, we leverage strong augmentations to generate positive
samples with a small amount of shared information, where
the domain nuisance information is suppressed. The sup-
pressed domain-related information decreases intra-domain
connectivity, and the learned unsupervised representation
can achieve a higher degree of invariance against domain
shifts. Besides, we employ cross domain nearest neighbors
(NN) as positive samples to impose the domain invariance by
enforcing the similarity between cross domain samples po-
tentially belonging to the same category, which can increase
the cross-domain intra-class connectivity. In addition, we
improve cross domain NN by a dual NN strategy that further
introduces in-domain NN as positives to overcome the intra-
domain variances and increase the intra-domain intra-class
connectivity. For cross-domain NNs, a direct search may
result in many false matches, due to distribution shifts across
domains. Since searching NN within a domain without dis-
tribution shift is more accurate than across domains, we
propose a novel Cross Domain Double-lock NN (CD2NN)
search strategy that employs more accurate in-domain NN as
a mediator to find more trustworthy cross domain neighbors
for boosting the performance. For in-domain NN, since di-
rect searching may fail to find sufficiently diverse samples to
overcome intra-domain variances, we resort to more distinct
cross domain NN as a mediator to find more diverse neigh-
bors, namely In-domain Cycle NN (ICNN). Totally, our dual
nearest neighbors, i.e., CD2NN and ICNN, can increase the
intra-class connectivity for UDG. In a nutshell, contributions
of this paper are summarized as:
•We propose a novel semantic connectivity metric to
indicate the inherent problem of contrastive learning in
UDG, and propose a novel method DN2A to increase
the semantic connectivity with theoretical guarantees.
•We propose to leverage strong augmentations to sup-
press the intra-domain connectivity and use cross do-
main neighbors as positive samples to increase intra-
class connectivity by enforcing the similarity over cross
domain samples potentially from the same category.
•We propose a novel cross domain double-lock near-
est neighbors search strategy to find more trustworthy cross domain neighbors and improve it by a novel
in-domain cycle nearest neighbors search strategy to
further boost the semantic connectivity.
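Under one plausible reading of the dual nearest-neighbor search described above, the procedure can be sketched as below: a cross-domain neighbor is kept only when the query and its in-domain nearest neighbor agree on it (the "double lock"), and the in-domain cycle neighbor is found by hopping through the cross-domain neighbor; the agreement rule and feature shapes are assumptions, not the paper's exact algorithm.

```python
# Hedged sketch of CD2NN / ICNN under one plausible reading of the text.
import torch
import torch.nn.functional as F

def nn_index(queries, keys, exclude_self=False):
    """Index of the cosine nearest neighbor of each query among keys."""
    sim = F.normalize(queries, dim=1) @ F.normalize(keys, dim=1).T
    if exclude_self:                        # queries == keys: ignore the trivial self-match
        sim.fill_diagonal_(float("-inf"))
    return sim.argmax(dim=1)

def dual_nearest_neighbors(feat_a, feat_b):
    """feat_a: (Na, D) query-domain features; feat_b: (Nb, D) features of another domain.
    Returns, per query in A, a cross-domain neighbor index in B that passes the double
    lock (-1 otherwise) and an in-domain cycle neighbor index in A."""
    in_nn = nn_index(feat_a, feat_a, exclude_self=True)   # in-domain mediator
    cross_nn = nn_index(feat_a, feat_b)                   # direct cross-domain NN
    mediator_cross_nn = cross_nn[in_nn]                   # cross-domain NN of the mediator
    agree = cross_nn == mediator_cross_nn                 # both locks point to the same sample
    cd2nn = torch.where(agree, cross_nn, torch.full_like(cross_nn, -1))
    icnn = nn_index(feat_b[cross_nn], feat_a)             # cycle back into the query domain
    return cd2nn, icnn

feat_photo, feat_sketch = torch.randn(128, 256), torch.randn(96, 256)
cd2nn_idx, icnn_idx = dual_nearest_neighbors(feat_photo, feat_sketch)
```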
Experiments show our DN2A outperforms state-of-the-
art methods by a large margin, e.g., 12.01% and 13.11%
accuracy gains with only 1% labels for linear evaluation on
PACS and DomainNet, respectively. Besides, with less than
4% of ImageNet's samples for training, our method
outperforms ImageNet pretraining, showing a promising way
to initialize models for the DG problem.
|
Liu_Ambiguity-Resistant_Semi-Supervised_Learning_for_Dense_Object_Detection_CVPR_2023
|
Abstract
With basic Semi-Supervised Object Detection (SSOD)
techniques, one-stage detectors generally obtain limited
improvements compared with their two-stage counterparts. We experi-
mentally find that the root lies in two kinds of ambiguities:
(1) Selection ambiguity that selected pseudo labels are less
accurate, since classification scores cannot properly rep-
resent the localization quality. (2) Assignment ambiguity
that samples are matched with improper labels in pseudo-
label assignment, as the strategy is misguided by missed
objects and inaccurate pseudo boxes. To tackle these prob-
lems, we propose an Ambiguity-Resistant Semi-supervised
Learning (ARSL) for one-stage detectors. Specifically, to
alleviate the selection ambiguity, Joint-Confidence Estima-
tion (JCE) is proposed to jointly quantify the classification
and localization quality of pseudo labels. As for the as-
signment ambiguity, Task-Separation Assignment (TSA) is
introduced to assign labels based on pixel-level predictions
rather than unreliable pseudo boxes. It employs a ’divide-
and-conquer’ strategy and separately exploits positives for
the classification and localization task, which is more ro-
bust to the assignment ambiguity. Comprehensive experi-
ments demonstrate that ARSL effectively mitigates the am-
biguities and achieves state-of-the-art SSOD performance
on MS COCO and PASCAL VOC. Codes can be found at
https://github.com/PaddlePaddle/PaddleDetection.
|
1. Introduction
Abundant data plays an essential role in deep learning
based object detection [18, 22, 23], yet labeling a large
amount of annotations is labour-consuming and expensive.
To save labeling expenditure, Semi-Supervised Object De-
tection (SSOD) attempts to leverage limited labeled data
Co-first author (Equal Contribution).
†Corresponding author.
This work was done when Chang Liu was an intern at Baidu Inc.
Figure 1. Comparing FCOS, Faster RCNN, and our approach on
COCO train2017 . Under the basic SSOD pipeline, FCOS obtains
limited improvements compared with Faster RCNN. Our approach
consistently promotes FCOS and achieves a state-of-the-art perfor-
mance on SSOD.
and easily accessible unlabeled data for detection tasks. Ad-
vanced SSOD methods [19, 33] follow the Mean-Teacher
[29] paradigm and mainly apply the self-training [11, 32]
technique to perform semi-supervised learning. Though this
pipeline has successfully promoted two-stage detectors, it
is less compatible with one-stage methods, which are also
important due to their competitive accuracy and computa-
tional efficiency. As verified in Fig. 1, compared with Faster
RCNN [23], FCOS [30] has a comparable supervised per-
formance, but achieves a relatively limited improvement un-
der the basic semi-supervised pipeline. To figure out this
problem, we analyze the core components of SSOD, e.g.,
pseudo-label selection and assignment.
With comprehensive investigations in Sec. 3.2, we find
that there exist selection and assignment ambiguities, hin-
dering the semi-supervised learning of one-stage detectors.
The selection ambiguity denotes that the selected pseudo
labels for unlabeled images are less accurate. It is caused
by the mismatch between classification scores and local-
ization quality. Specifically, compared with Faster RCNN,
FCOS has a much smaller Pearson correlation coefficient
between classification and localization (0.279 vs. 0.439),
Figure 2. An overview of our Ambiguity-Resistant Semi-supervised Learning. Training batch contains both labeled and unlabeled images.
On unlabeled images, the teacher first predicts the joint confidence via JCE. Then, TSA assigns and generates the training targets for the
student. PPM denotes the potential positive mining in TSA. The overall loss consists of supervised Lsand unsupervised loss Lu.
which is adverse to the pseudo-label selection. The reason
is that one-stage detectors like FCOS lack RPN [23] and
RoI Pooling/Align [6, 23] to extract accurate object infor-
mation for localization quality estimation, meanwhile the
predicted centerness of FCOS cannot properly represent the
localization quality.
On the other hand, the assignment ambiguity indicates
that samples on unlabeled images are assigned with im-
proper labels. Our experiments show that 73.5% of posi-
tives are wrongly matched with negative labels, and there
also exist many false positives. In essence, the assignment
strategy converts bounding boxes into pixel-level labels, but
neglects the situations that many pseudo boxes are inaccu-
rate and plenty of objects are missed due to the threshold fil-
tering. It causes the assignment ambiguity which misguides
the detector. Compared with two-stage detectors, one-stage
detectors which require pixel-level labels, are more sensi-
tive to the assignment ambiguity.
Based on these observations and analysis, we pro-
pose the Ambiguity-Resistant Semi-supervised Learn-
ing(ARSL) for one-stage detectors. To mitigate the se-
lection ambiguity, Joint-Confidence Estimation (JCE) is
proposed to select high-quality pseudo labels based on the
joint quality of classification and localization. Specifically,
JCE employs a double-branch structure to estimate the con-
fidence of the two tasks, then combines them to form
the joint confidence of detection results. In training, the
two branches are trained together in united supervision to
avoid the sub-optimal state. Different from other task-
consistent or IoU-estimation methods [5,9,15], JCE explic-
itly integrates the classification and localization quality, and
does not need complicated structures and elaborate learn-
ing strategies. Additionally, JCE is more capable of picking high-quality pseudo labels and achieves a better SSOD per-
formance, as verified in Sec. 4.4.
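A minimal sketch of a JCE-style head is given below, assuming a simple two-branch design whose outputs are multiplied into a joint confidence; the exact fusion and the united supervision used in the paper may differ, so this is an illustration rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class JointConfidenceHead(nn.Module):
    """Minimal sketch of a JCE-style double-branch head (not the authors' code).

    Assumptions (not specified in this excerpt): the two branches are single
    conv layers, and the joint confidence is the product of the per-class
    classification probability and a predicted localization-quality score.
    """
    def __init__(self, in_channels: int, num_classes: int):
        super().__init__()
        self.cls_branch = nn.Conv2d(in_channels, num_classes, kernel_size=3, padding=1)
        self.loc_quality_branch = nn.Conv2d(in_channels, 1, kernel_size=3, padding=1)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        cls_prob = self.cls_branch(feat).sigmoid()              # (B, C, H, W)
        loc_quality = self.loc_quality_branch(feat).sigmoid()   # (B, 1, H, W)
        return cls_prob * loc_quality                           # joint confidence per class

# Usage: keep pseudo labels whose joint confidence exceeds a threshold.
head = JointConfidenceHead(256, num_classes=80)
keep = head(torch.randn(2, 256, 32, 32)) > 0.5
```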
As for the assignment ambiguity, Task-Separation As-
signment (TSA) is proposed to assign labels based on pixel-
level predictions rather than unreliable pseudo boxes. Con-
cretely, based on the predicted joint confidence, TSA parti-
tions samples into negatives, positives, and ambiguous can-
didates via the statistics-based thresholds. The confident
positives are trained on both classification and localization
tasks, since they are relatively accurate and reliable. While
for the ambiguous candidates, TSA employs a 'divide-and-
conquer’ strategy and separately exploits potential posi-
tives from them for the classification and localization task.
Compared with other dense-guided assignments [12,28,36],
TSA adopts a more rational assignment metric and sepa-
rately exploits positives for the two tasks, which can ef-
fectively mitigate the assignment ambiguity as proved in
Sec. 4.4. The general structure of ARSL is illustrated in
Fig. 2, and our contributions are summarized as follows:
Comprehensive experiments are conducted to analyze
the semi-supervised learning of one-stage detectors,
and reveal that the limitation lies in the selection and
assignment ambiguities of pseudo labels.
JCE is proposed to mitigate the selection ambiguity by
jointly quantifying the classification and localization
quality. To alleviate the assignment ambiguity, TSA
separately exploits positives for the classification and
localization task based on pixel-level predictions.
ARSL exhibits remarkable improvement over the ba-
sic SSOD baseline for one-stage detectors as shown
in Fig. 1, and achieves state-of-the-art performance on
MS COCO and PASCAL VOC.
|
Kotovenko_Cross-Image-Attention_for_Conditional_Embeddings_in_Deep_Metric_Learning_CVPR_2023
|
Abstract
Learning compact image embeddings that yield seman-
tic similarities between images and that generalize to un-
seen test classes, is at the core of deep metric learning
(DML). Finding a mapping from a rich, localized image
feature map onto a compact embedding vector is challeng-
ing: Although similarity emerges between tuples of images,
DML approaches marginalize out information in an individ-
ual image before considering another image to which simi-
larity is to be computed.
Instead, we propose during training to condition the em-
bedding of an image on the image we want to compare it to.
Rather than embedding by a simple pooling as in standard
DML, we use cross-attention so that one image can iden-
tify relevant features in the other image. Consequently, the
attention mechanism establishes a hierarchy of conditional
embeddings that gradually incorporates information about
the tuple to steer the representation of an individual image.
The cross-attention layers bridge the gap between the origi-
nal unconditional embedding and the final similarity and al-
low backpropagation to update encodings more directly than
through a lossy pooling layer. At test time we use the re-
sulting improved unconditional embeddings, thus requiring
no additional parameters or computational overhead. Ex-
periments on established DML benchmarks show that our
cross-attention conditional embedding during training im-
proves the underlying standard DML pipeline significantly
so that it outperforms the state-of-the-art.
|
1. Introduction
Deep metric learning (DML) seeks embeddings that al-
low a predefined distance metric to not only express se-
mantic similarities between training samples, but to also
transfers to unseen classes. The ability to learn compact
image representations that generalize well and transfer in
zero-shot manner to unseen test data distributions is crucial
for a wide range of visual perception tasks such as visual
retrieval [51, 63], image classification [44, 80, 88], cluster-
ing [7, 28], or person (re-)identification [11, 27, 69].
DML research has investigated important questions like the effective mining of training samples [4, 54, 57, 65, 68],
the training loss function [51,53,68,77,81,82], and ensem-
ble strategies [18, 22, 55, 63, 86]. However, learning pow-
erful embeddings is by definition a challenging problem:
we seek a mapping from a rich local feature encoding that
projects this tensor with all its comprehensive spatial in-
formation and local details onto a compact vector that acts
as a holistic embedding for an entire image. Local details
have to be aggregated and all the important spatial interre-
lations in an image, e.g., the spatial composition of a scene
or the relative configuration of different body parts to an-
other, have to be summed up in a mere vector. However,
image similarity is multi-modal in nature—two images can
be similar with respect to one characteristic but different
in light of another. The challenge is consequently to learn
which local details to marginalize out and which to preserve
when the embedding function only sees one image and not
also the one we want to compute its similarity to. However,
during training we have access to all images and power-
ful loss functions such as multi-similarity loss [82] already
compare all image tuples. Thus, we could significantly sim-
plify learning the embedding by conditioning it on another
image that we then compute the similarity to.
Contributions: During training we therefore compute
similarities using conditional embeddings of the image we
want to represent conditioned on another image we want to
compare against. Thus, the second image focuses the atten-
tion of the embedding function on characteristics that are
meaningful for a subsequent comparison. Rather than ap-
plying a mere pooling operation, we utilize cross-attention
to project standard image feature encodings (such as a
ResNet convolutional feature map [26]) onto an embedding
vector while conditioning it on an embedding of the other
image. Repeating these cross-attention blocks then cre-
ates a hierarchy of conditional embeddings by successively
adding the conditioning information and gradually transi-
tioning from the challenging unconditional embedding to
the more accessible conditional one. The hierarchy there-
fore divides the difficult problem of learning an embedding
into several smaller steps. Moreover, due to cross-attention,
error backpropagation from the similarity measure can now
directly update the image encoding and the embeddings
rather than having to optimize the encoding only through
the pooling operation of the embedding, which risks atten-
uated gradients. Consequently the encodings and uncondi-
tional embeddings improve over their counterparts in clas-
sical DML training. During inference, we therefore employ
these unconditional representations so that the approach af-
ter training works just like standard DML with no addi-
tional parameters and no extra computational costs, sim-
ply encoding individual images using a ResNet feature en-
coder followed by a standard embedding network that out-
puts the usual pooled feature vector. Our experimental eval-
uation shows that through cross-attention, our conditional
as well as the underlying unconditional embeddings signif-
icantly improve over the embeddings obtained by DML so
far. Moreover, the computational overhead during training
is negligible compared to the costs of current DML training.
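To make the mechanism above concrete, the following is a minimal sketch of one cross-attention conditioning block, assuming a pooled query from image A attending over image B's flattened convolutional feature map; layer counts, normalization, and projection details are assumptions rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn

class ConditionalEmbedding(nn.Module):
    """Minimal sketch of a cross-attention conditional embedding (not the authors' code).

    Assumptions: image A is represented by a pooled query vector, image B by its
    spatial feature map (e.g. a ResNet conv map flattened to tokens); one
    cross-attention layer lets A attend to B's local features. The paper stacks
    several such blocks; residual and normalization details may differ.
    """
    def __init__(self, dim: int = 512, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.proj = nn.Linear(dim, dim)

    def forward(self, emb_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
        # emb_a: (B, D) unconditional embedding of image A
        # feat_b: (B, N, D) flattened feature map of image B (N spatial tokens)
        q = emb_a.unsqueeze(1)                     # (B, 1, D) query
        out, _ = self.attn(q, feat_b, feat_b)      # attend to B's local features
        return self.proj(out.squeeze(1)) + emb_a   # embedding of A conditioned on B

# At test time only the unconditional emb_a is used, so no extra inference cost.
cond = ConditionalEmbedding()(torch.randn(4, 512), torch.randn(4, 49, 512))
```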
|
Karpikova_FIANCEE_Faster_Inference_of_Adversarial_Networks_via_Conditional_Early_Exits_CVPR_2023
|
Abstract
Generative DNNs are a powerful tool for image synthe-
sis, but they are limited by their computational load. On
the other hand, given a trained model and a task, e.g. faces
generation within a range of characteristics, the output im-
age quality will be unevenly distributed among images with
different characteristics. It follows that we might restrain
the model’s complexity on some instances, maintaining a
high quality. We propose a method for diminishing compu-
tations by adding so-called early exit branches to the orig-
inal architecture, and dynamically switching the computa-
tional path depending on how difficult it will be to render the
output. We apply our method on two different SOTA models
performing generative tasks: generation from a semantic
map, and cross-reenactment of face expressions; showing it
is able to output images with custom lower-quality thresh-
olds. For a threshold of LPIPS ≤0.1, we diminish their
computations by up to a half. This is especially relevant
for real-time applications such as synthesis of faces, when
quality loss needs to be contained, but most of the inputs
need fewer computations than the complex instances.
|
1. Introduction
Image synthesis by generative adversarial networks
(GANs) received great attention in the last years [64, 71],
its applications span from image-to-image translation [35]
to text-to-image rendering [21], neural head avatars gener-
ation [17] and many more. However, this approach suffers
from heavy computational burdens when challenged with
producing photo-realistic images. Our work stems from the
observation that deep neural networks (DNNs) output im-
ages with different but consistent quality when conditioned
on certain parameters. Since their expressivity is uneven
within the set of possibly generated images, it follows that
for some examples, a simpler DNN may suffice in generat-
ing an output with the required quality.
*These authors contributed equally to this work
On the other hand, approaches aimed at easing the
heavy computational load of DNNs have been applied with
great results, significantly decreasing redundant computa-
tions [2,13]. While strategies such as pruning [46,53,65] or
knowledge distillation [4,9,26] generate a DNN with fewer
parameters, early exit (EE) [42,78] is a setup that allows for
dynamic variation of the computational burden, and there-
fore presents itself as an ideal candidate for an image gen-
eration strategy aimed at outputting pictures of consistent
quality, while avoiding excessive computation due to their
irregular rendering difficulty.
Despite this, implementing EE strategies has remained
out of the scope of studies on generative models. This is
perhaps due to the fact that EE processes logits of inter-
mediate layers, thus restricting their field of application to
tasks where the latter are meaningful ( e.g. in classification),
while excluding pipelines in which a meaningful output is
given only at the last layer ( e.g. generative convolutional
networks).
We propose a method that employs an EE strategy for
image synthesis, dynamically routing the computational
flow towards the needed exit in accordance to pictures’
complexity, therefore reducing computational redundancy
while maintaining consistent quality. To accomplish this,
we employ three main elements, which constitute the novel
contributions of our work.
First, we attach exit branches to the original DNN (re-
ferred to as the backbone), as portrayed in Fig. 1. These
branches are built of lightweight versions of the modules
constituting the backbone architecture; their complexity can
be tuned in accordance with the desired quality-cost rela-
tion. Their depth ( i.e. number of modules) varies in ac-
cordance with the number of backbone modules left after the
point they get attached to. In this way, intermediate back-
bone logits are fairly processed.
In second place, we make use of a small database of fea-
tures, from which guiding examples are selected and used to
condition image generation by concatenating them to the in-
put of each branch. These features are obtained by process-
ing a selection of images by the first layers of the backbone.
Its presence yields a quality gain for earlier exits, at the ex-
pense of a small amount of memory and computations, thus
harmonizing exits’ output quality. This is extremely handy
for settings where real-time rendering is needed and guid-
ing examples can be readily provided, such as neural avatar
generation.
Lastly, the third component of our workflow is a predic-
tor, namely a DNN trained on the outputs of our branches,
and capable of indicating the exit needed for outputting an
image of a given quality. This element is fundamental for
ensuring a consistent lower-quality threshold, as we will
see.
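A minimal sketch of how such predictor-driven routing could look is given below, assuming per-exit quality estimates (e.g. predicted LPIPS) and omitting the feature-database conditioning; the names and the thresholding rule are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

def route_through_exits(x, backbone_blocks, exit_branches, predictor, lpips_budget=0.1):
    """Minimal sketch of early-exit routing (not the authors' implementation).

    Assumptions: `backbone_blocks` and `exit_branches` are aligned lists of
    modules, and `predictor` maps the input to a predicted quality (e.g. LPIPS)
    for every exit; the guiding-feature conditioning described in the paper is
    omitted for brevity.
    """
    predicted_lpips = predictor(x)                       # (num_exits,) predicted quality
    ok = (predicted_lpips <= lpips_budget).nonzero()     # exits meeting the budget
    chosen = int(ok[0]) if len(ok) > 0 else len(exit_branches) - 1

    h = x
    for block in backbone_blocks[: chosen + 1]:
        h = block(h)                                     # run backbone up to the chosen exit
    return exit_branches[chosen](h)                      # lightweight branch renders the output

# Example wiring with dummy modules:
blocks = [nn.Identity() for _ in range(3)]
exits = [nn.Identity() for _ in range(3)]
pred = lambda x: torch.tensor([0.15, 0.09, 0.05])        # pretend per-exit LPIPS estimates
out = route_through_exits(torch.randn(1, 3, 64, 64), blocks, exits, pred)
```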
Our method is applicable to already trained models, but
requires additional training for the newly introduced com-
ponents. We report its application to two distinct tasks
of the image synthesis family, namely generation from a
semantic map, and cross-reenactment of face expressions.
Our main result may be summarized in this way: the method
is easily applicable to already existing and trained genera-
tive models, it is capable of outputting images with custom
lower-quality threshold by routing easier images to shorter
computational paths, and the mean gain in terms of saved
computations per quality loss is, respectively, 1.2×10^3
and 1.3×10^3 GFLOPs/LPIPS for the two applications.
|
Lamb_Fantastic_Breaks_A_Dataset_of_Paired_3D_Scans_of_Real-World_CVPR_2023
|
Abstract
Automated shape repair approaches currently lack ac-
cess to datasets that describe real-world damaged geome-
try. We present Fantastic Breaks (and Where to Find Them:
https://terascale-all-sensing-research-
studio.github.io/FantasticBreaks ), a dataset
containing scanned, waterproofed, and cleaned 3D meshes
for 150 broken objects, paired and geometrically aligned
with complete counterparts. Fantastic Breaks contains class
and material labels, proxy repair parts that join to broken
meshes to generate complete meshes, and manually anno-
tated fracture boundaries. Through a detailed analysis of
fracture geometry, we reveal differences between Fantastic
Breaks and synthetic fracture datasets generated using ge-
ometric and physics-based methods. We show experimental
shape repair evaluation with Fantastic Breaks using mul-
tiple learning-based approaches pre-trained with synthetic
datasets and re-trained with a subset of Fantastic Breaks.
|
1. Introduction
Damage to objects is an expected occurrence of every-
day real-world usage. However, when damage occurs, ob-
jects that could be repaired are often thrown out. Addi-
tive manufacturing techniques are rapidly becoming acces-
sible at the consumer level, with 3D printing technologies
available for materials such as plastics, metals, and even
ceramics and wood. Though current approaches for re-
pair have been largely manual and restricted to niche ar-
eas such as cultural heritage restoration, a large body of
recent research has emerged on the automated reversal of
damage, including reassembly of fractured parts using 3D
scans [4,9,17,19,21–23,38,39,51,53,54], or generation of
new repair parts when portions of the original object are ir-
retrievably lost due to damage [18, 20, 28–30, 32, 35, 40, 47,
48]. Geometry-driven approaches based on shape match-
ing [4,17,18,21–23,27,32,35,38–40,47,48] that are not us-
able for objects of unknown complete geometry have given
Figure 1. We present Fantastic Breaks , a dataset of 3D scans of
real-world broken objects (top) aligned to 3D scans of complete
counterparts (bottom). Objects span classes such as mugs, plates,
statues, jars, and bowls—household items prone to damage.
way to learning-driven approaches [9, 19, 20, 28–30, 51, 53]
aimed at generalization to repair at a large scale.
However, a principal challenge limiting understanding of
real-world damage and limiting application-focused evalua-
tion of repair approaches is that datasets of real-world dam-
age for consumer space objects are virtually non-existent.
Current learning-driven approaches for repair use datasets
where fracture-based damage is synthetically generated us-
ing geometric approaches such as Boolean operations with
primitives [9, 18, 28–31]. As they make assumptions about
the fracture process rather than using data-driven fracture
generation, the practical usability of such geometric meth-
ods for repairing real fractures is unknown. With Breaking
Bad, Sell ´an et al. [44] have taken a first step toward large-
scale fracture dataset generation. Breaking Bad consists of
3D shapes from Thingi10k [55] and PartNet [37] subjected
to physics-based damage using fracture modes [45]. By re-
moving macro-scale shape assumptions embodied by geo-
metric primitives, Breaking Bad is a promising step for re-
search in shape assembly and repair. However, fractures in
Breaking Bad suffer from typical issues of resolution and
simulation time step size that underlie physics simulations.
The dataset is thus unfortunately hindered in providing a
faithful representation of real-world damage.
In this work, we contribute Fantastic Breaks , the first
dataset of 3D scans of damaged objects paired with 3D
scans of their complete non-damaged counterparts, as
shown in Figure 1. Each damaged object—hereafter re-
ferred to as a broken object due to the nature of damage suf-
fered—is 3D scanned and geometrically registered to a scan
of the undamaged complete object, such that intact regions
of the complete and broken scans are aligned. At the time of
publication, the dataset contains 150 broken/complete pairs.
Our damage infliction often leaves one broken part in-
tact, destroying or over-fragmenting the remainder of the
object. We use an off-the-shelf subtraction-based ap-
proach [27] to generate repair part proxies from the intact
broken part aligned to its complete counterpart. We also
provide manually annotated class labels, material labels,
and annotated fractured regions on the broken meshes. Our
work emulates endeavors of groups in vision and robotics
that contribute datasets of 3D scanned everyday-use ob-
jects [5, 6, 13, 25, 46]. Our analysis of real-world fracture
properties in Fantastic Breaks reveals fine-scale fracture
structure that enables our dataset to overcome the draw-
backs of synthetic datasets. We use our dataset to evaluate
existing shape repair approaches on real fractured objects.
We summarize our contributions as follows:
1. We contribute the first 3D scanned real-world dataset
of geometrically aligned broken/complete object pairs
to enable application-focused evaluation of repair.
2. We provide class, material, and fracture surface anno-
tations, and ground truth repair part proxies.
3. We contribute a geometric analysis of Fantastic Breaks
in comparison to existing synthetic fracture datasets.
4. We provide evaluations of existing shape repair ap-
proaches using Fantastic Breaks .
|
Lee_Paired-Point_Lifting_for_Enhanced_Privacy-Preserving_Visual_Localization_CVPR_2023
|
Abstract
Visual localization refers to the process of recovering
camera pose from input image relative to a known scene,
forming a cornerstone of numerous vision and robotics sys-
tems. While many algorithms utilize sparse 3D point cloud
of the scene obtained via structure-from-motion (SfM) for
localization, recent studies have raised privacy concerns by
successfully revealing high-fidelity appearance of the scene
from such sparse 3D representation. One prominent ap-
proach for bypassing this attack was to lift 3D points to
randomly oriented 3D lines thereby hiding scene geome-
try, but latest work have shown such random line cloud
has a critical statistical flaw that can be exploited to break
through protection. In this work, we present an alternative
lightweight strategy called Paired-Point Lifting (PPL) for
constructing 3D line clouds. Instead of drawing one ran-
domly oriented line per 3D point, PPL splits 3D points into
pairs and joins each pair to form 3D lines. This seemingly
simple strategy yields 3 benefits, i) new ambiguity in fea-
ture selection, ii) increased line cloud sparsity and iii) non-
trivial distribution of 3D lines, all of which contributes to
enhanced protection against privacy attacks. Extensive ex-
perimental results demonstrate the strength of PPL in con-
cealing scene details without compromising localization ac-
curacy, unlocking the true potential of 3D line clouds.
|
1. Introduction
Visual localization is the fundamental problem of esti-
mating the camera pose from an input image with respect to
a known 3D scene. It plays a crucial role in many com-
puter vision and robotics applications, including human-
computer interaction based on augmented or mixed real-
ity (AR/MR), 3D reconstruction via structure-from-motion
(SfM) [1, 36, 37, 39] and autonomous navigation systems in
drones, self-driving vehicles, or robots using simultaneous
localization and mapping (SLAM) [5, 22, 26].
To date, many practical visual localization algo-
rithms are still structure-based approaches [12], utilizing a
global sparse 3D model of the scene obtained from SfM
or SLAM. In such approaches, 2D-3D correspondences
are formed between 2D image points and the global 3D
*Corresponding author
(a) Point cloud
(b) Line cloud [40]
(c) PPL (ours)
Figure 1. Visualization of different 3D scene representations and
respective image reconstruction results using InvSfM [30]. Images
in (b) and (c) are reconstructed by estimating the 3D point clouds
from line clouds via [3]. While images revealed from original line
clouds (OLC) [40] still contain basic scene details, those from the
proposed method (PPL) exhibit much degraded scene quality.
structure by comparing feature descriptors (e.g., SIFT [21],
ORB [34], or other alternatives [25, 43]), after which they
are used to perform robust camera pose estimation based on
geometric constraints and RANSAC [2, 8, 31]. While most
research over the last decade have made significant progress
in improving algorithm accuracy, scalability [19, 23, 35, 47]
and efficiency [12,20], the requirement that sparse 3D point
cloud and associated feature descriptors need to be persis-
tently accessible has largely remained unchanged.
Recently, it caught the research community by surprise
when Pittaluga et al. [30] showed sparse 3D point clouds
can retain enough scene details such as characters and tex-
tures that can be used to uncover a high-fidelity image of the
scene. Since this process only requires 2D projection of 3D
points and their respective feature descriptors, this raised
privacy concerns related to uploading sparse 3D models in
the cloud or storing them on the end-user device, both of
which are commonly found settings in visual localization.
(a) Standard lifting with uniformly-distributed line directions [40]
(b) Paired-point lifting (PPL) (ours)
Figure 2. Illustration of the previous standard lifting approach (also referred to as the original line cloud (OLC)) [40] and PPL (ours) for
constructing 3D line clouds from sparse 3D point clouds. In (a), each line passes through one 3D point with its direction uniformly sampled
on a unit sphere, carrying one descriptor. In (b), each line passes through two 3D points selected at random, carrying two descriptors.
One of the most notable approaches for addressing the above
privacy attack on point clouds is geometric lifting, whereby
each 3D point is transformed into a randomly-oriented 3D
line passing through the respective point [40]. By sam-
pling each line direction uniformly on the unit sphere, the
approach aims to conceal the scene geometry and prevent
meaningful 2D projections of the sparse point cloud.
Nevertheless, a recent work by Chelani et al. [3] showed
3D lines whose directions are uniformly drawn from a unit
sphere carry an unintended statistical characteristic that the
original 3D points are likely to be near the high-density re-
gions of the closest points between lines. This property can
be exploited to reveal the scene geometry (see §2), and to
the best of our knowledge, the only suggested option for
mitigating this attack so far is to use sparser point clouds,
which comes at the cost of reduced localization accuracy.
In this work, we argue that the privacy-preserving prop-
erty of 3D line clouds can be enhanced by applying a differ-
ent line-construction approach. Specifically, we propose to
yield 3D lines by joining random pairs of 3D points. This
approach, which we call Paired-Point Lifting (PPL) , is mo-
tivated by the three key observations listed below:
1. confusion over feature descriptors: each line carries
two feature descriptors, so assigning the correct de-
scriptors to each of two 3D points requires guesswork,
2. line cloud sparsification: PPL results in 50% more
sparse line clouds, thus degrading the accuracy of 3D
scene point recovery using [3], which relies on a suffi-
cient number of neighboring lines, and
3. non-trivial distribution of lines: the finding from [3]
that the original 3D point is likely to be near the high-
density region of closest points to other lines is plagued
by the presence of an extra 3D point in each line and non-
uniformly distributed line directions.
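A minimal sketch of the PPL construction described above is given below; the point-plus-direction line parameterization and the handling of an odd leftover point are assumptions made for illustration, not details taken from the paper.

```python
import numpy as np

def paired_point_lifting(points, descriptors, rng=None):
    """Minimal sketch of Paired-Point Lifting (not the authors' code).

    points: (N, 3) sparse 3D points; descriptors: (N, D) feature descriptors.
    Points are split into random pairs; each pair defines one 3D line that
    carries both descriptors. With odd N, the leftover point is dropped here
    (how the paper handles it is not specified in this excerpt).
    """
    rng = np.random.default_rng() if rng is None else rng
    idx = rng.permutation(len(points))
    idx = idx[: len(idx) // 2 * 2].reshape(-1, 2)          # random disjoint pairs
    p1, p2 = points[idx[:, 0]], points[idx[:, 1]]
    directions = p2 - p1
    directions /= np.linalg.norm(directions, axis=1, keepdims=True)
    lines = np.concatenate([p1, directions], axis=1)        # point-on-line + unit direction
    line_descs = np.stack([descriptors[idx[:, 0]], descriptors[idx[:, 1]]], axis=1)
    return lines, line_descs

# The resulting line cloud is 50% sparser than the point cloud, and each line
# hides which of its two descriptors belongs to which of its two 3D points.
```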
We illustrate in §3 and §5 that each of the above factors
adds a layer of difficulty in revealing the scene geometry
and image-level details, synergistically enhancing privacy-
preserving visual localization (partly shown in Fig. 1) with-
out compromising camera localization accuracy.
Overall, our contributions can be summarized as follows:
+ a new strategy called paired-point lifting (PPL) for
constructing 3D line clouds with each line concealing
two 3D points from the respective sparse point clouds,
+ careful empirical analysis of success factors in PPL
through an ablation study using synthetic and real data,
+ improving the 3D point recovery algorithm in [3] to
allow handling PPL-based line clouds and locate two
3D points in each line for a fairer comparison,
+ a subsequent upgrade of PPL called PPL+ to address
potential drawback of PPL with planar scenes, and
+ extensive experimental evaluation of both PPL and
PPL+ against other baselines on a range of public
datasets using different point recovery techniques.
|
Kollias_Multi-Label_Compound_Expression_Recognition_C-EXPR_Database__Network_CVPR_2023
|
Abstract
Research in automatic analysis of facial expressions
mainly focuses on recognising the seven basic ones. How-
ever, compound expressions are more diverse and represent
the complexity and subtlety of our daily affective displays
more accurately. Limited research has been conducted for
compound expression recognition (CER), because only a
few databases exist, which are small, lab controlled, im-
balanced and static. In this paper we present an in-the-
wild A/V database, C-EXPR-DB, consisting of 400 videos
of 200K frames, annotated in terms of 13 compound expres-
sions, valence-arousal emotion descriptors, action units,
speech, facial landmarks and attributes. We also propose
C-EXPR-NET, a multi-task learning (MTL) method for CER
and AU detection (AU-D); the latter task is introduced to en-
hance CER performance. For AU-D we incorporate AU se-
mantic description along with visual information. For CER
we use a multi-label formulation and the KL-divergence
loss. We also propose a distribution matching loss for cou-
pling CER and AU-D tasks to boost their performance and
alleviate negative transfer (i.e., when MT model’s perfor-
mance is worse than that of at least one single-task model).
An extensive experimental study has been conducted illus-
trating the excellent performance of C-EXPR-NET, vali-
dating the theoretical claims. Finally, C-EXPR-NET is
shown to effectively generalize its knowledge in new emo-
tion recognition contexts, in a zero-shot manner.
|
1. Introduction
For the past twenty years research in automatic analysis
of facial behaviour was mainly limited to the recognition of
the so-called six universal expressions (e.g., anger, happi-
ness), plus the neutral state, influenced by the seminal work
of Ekman [7]. However, the affect model based on basic ex-
pressions is limited in the ability to represent the complexity
and subtlety of our daily affective displays [19]. Many more
facial expressions exist and are used regularly by humans.
The compound expressions are a better representation of af-
fective displays in everyday interactions. Compound means that the expression category is constructed as a combination
of two basic expression categories. Obviously, not all com-
binations are meaningful for humans. Twelve compound
expressions are most typically expressed by humans, e.g.,
people regularly produce a happily surprised expression and
observers do not have any problem distinguishing it from an
angrily surprised expression.
The design of systems capable of understanding the
community perception of expressional attributes and af-
fective displays is receiving increasing interest. Benefiting
from the great progress in deep learning research, the per-
formance of expression recognition has greatly improved.
However, deep-model based methods are starved for labeled
data, whereas the annotation is a highly labor intensive and
time consuming process and the complexity of expression
categories obscures the labelling procedure. Initially, re-
search was mainly limited to posed behavior captured in
highly controlled conditions. Some representative datasets
are CK+ [17], MMI [26], CFEE [6] and iCV-MEFED [18].
However, it is now widely accepted that progress in
a particular application domain is significantly catalysed
when a large number of datasets are collected in-the-wild
(i.e., in unconstrained conditions). Thus, expression analy-
sis could not only focus on spontaneous behaviors, but also
on behaviours captured in unconstrained conditions. Hence,
two in-the-wild databases have been generated, EmotioNet
[1] and RAF-DB [15]. These databases, although in-the-
wild, are: i) very small in terms of size (RAF-DB contains
around 4,000 images; EmotioNet contains around 1,500 im-
ages); ii) very imbalanced (in RAF-DB one category con-
sists of 1,700 images; in EmotioNet one category contains
half the samples); iii) static (i.e., they contain only images);
iv) lacking a training-validation-test set split.
It is evident that these databases are very small and do
not contain sufficient data for both training and evaluating
deep learning systems, so that the results are meaningful
and illustrate good generalization. Compound expression
recognition (CER) is in its infancy due to the above limita-
tions. To this end, we collected the largest, diverse, in-the-
wild audiovisual database, C-EXPR-DB, reliably annotated
for 12 compound expressions plus a category referring to
other affective states. C-EXPR-DB is also annotated for:
i) continuous dimensions of valence-arousal (how positive-
negative, active-passive the emotional state is); ii) speech
detection; iii) facial landmarks and bounding boxes; iv) ac-
tion units (activation of facial muscles); v) facial attributes.
Recently, some works have utilized multi-task learning
(MTL) for basic expression recognition and AU detection
(AU-D) [2, 3]. They have shown that MTL helps improve
the performance across tasks. Inspired by this, we propose
a novel methodology, C-EXPR-NET, for MTL of CER and
AU-D; we are interested in CER but we use AU-D as an auxil-
iary task to enhance CER performance. We utilize SEV-Net
[30] for AU-D. The FACS manual [7] provides a complete
set of textual descriptions for AU definitions; such a set of
AU descriptions provides rich semantic information (about
facial area/position, action, motion direction/intensity, rela-
tion of AUs). The model introduces such AU semantic de-
scriptions as auxiliary information and processes them via
inter- and intra-transformer modules and a cross modality
attention module for AU-D.
For CER we use a multi-label formulation, where the
output classes are 6 (the basic expressions) and each da-
tum contains annotations for 2 of these 6 categories. We
use a softmax activation in the output layer of our method
that tackles CER and the Kullback-Leibler divergence (KL-
div) as its loss function. The concept for this formulation
aligns with what compound expressions are, i.e., expres-
sions that can be constructed as combinations of basic cat-
egories. In addition, this formulation allows our method to
be additionally trained with images annotated with the 6 ba-
sic expressions (as a form of data augmentation or to learn
to differentiate between basic expressions as well). This
formulation deviates from traditional CER approaches that
use a multi-class formulation, with the compound expres-
sions being mutually exclusive classes (i.e., each datum is
annotated in terms of only one compound expression).
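The following is a minimal sketch of this multi-label formulation, assuming each compound label is given as the indices of its two constituent basic expressions and that the KL target is a uniform split between them; this target choice is an assumption made for illustration, not a detail stated in this excerpt.

```python
import torch
import torch.nn.functional as F

def compound_expression_kl_loss(logits, basic_pairs, num_basic=6):
    """Minimal sketch of the multi-label CER formulation (not the authors' code).

    Assumptions: `logits` are the 6-way basic-expression outputs and each
    compound label is encoded as a pair of basic-class indices; the target is
    taken as a uniform distribution over the two constituent basic expressions.
    """
    target = torch.zeros(logits.size(0), num_basic)
    target.scatter_(1, basic_pairs, 0.5)                  # e.g. happily-surprised -> 0.5/0.5
    log_probs = F.log_softmax(logits, dim=1)              # softmax output layer
    return F.kl_div(log_probs, target, reduction="batchmean")

# Usage: four samples, each annotated with two of the six basic classes.
logits = torch.randn(4, 6)
pairs = torch.tensor([[3, 5], [0, 5], [1, 2], [3, 5]])
loss = compound_expression_kl_loss(logits, pairs)
```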
However, when we compared the performance of our
multi-task method with that of single-task (ST) methods for
CER and AU-D, we observed that MTL increased CER per-
formance, but it harmed AU-D performance. Thus negative
transfer occurred, as the CER task dominated the training pro-
cess. Inspired by [11, 12], we propose a distribution match-
ing approach based on task relatedness, i.e., knowledge ex-
change between CER and AU-D tasks is enabled via dis-
tribution matching over their predictions. We demonstrate
empirically that this distribution matching approach allevi-
ates negative transfer and further boosts CER performance.
The main contributions of this work are summarized below:
• We generate the largest, diverse, in-the-wild A/V
database, C-EXPR-DB, annotated for compound expres-
sions, valence-arousal, AUs, facial attributes, speech de-
tection, facial landmarks and bounding boxes;
• We propose the novel C-EXPR-NET, a MT method for CER and AU-D; the latter task acts as an auxiliary one for
enhancing the former task’s performance (we are the first
to prove this). For AU-D, our method incorporates visual
information, as well as AU descriptors (that act as auxil-
iary, rich, semantic information) and processes them via
inter- and intra-transformer modules and a cross-modality
attention module. For CER our method uses a multi-label
formulation and KL-div loss. Our method finally contains
a distribution matching loss, based on task relatedness,
for coupling the tasks to alleviate negative transfer and
further boost their performance.
• We conduct an extensive experimental study which shows
that: i) C-EXPR-NET outperforms the state-of-the-art
(sota) both for CER and BER on RAF-DB, regardless
if trained from scratch or pre-trained on C-EXPR-DB;
ii) C-EXPR-NET outperforms the sota regardless if AU
annotations are manual or automatic; iii) C-EXPR-NET
can effectively generalize its knowledge in new emotion
recognition contexts, in a zero-shot manner.
|
Li_Learning_Distortion_Invariant_Representation_for_Image_Restoration_From_a_Causality_CVPR_2023
|
Abstract
In recent years, we have witnessed the great advance-
ment of Deep neural networks (DNNs) in image restoration.
However, a critical limitation is that they cannot general-
ize well to real-world degradations with different degrees
or types. In this paper, we are the first to propose a novel
training strategy for image restoration from the causality
perspective, to improve the generalization ability of DNNs
for unknown degradations. Our method, termed Distortion
Invariant representation Learning (DIL), treats each distor-
tion type and degree as one specific confounder, and learns
the distortion-invariant representation by eliminating the
harmful confounding effect of each degradation. We de-
rive our DIL with the back-door criterion in causality by
modeling the interventions of different distortions from the
optimization perspective. Particularly, we introduce coun-
terfactual distortion augmentation to simulate the virtual
distortion types and degrees as the confounders. Then, we
instantiate the intervention of each distortion with a virtual
model updating based on corresponding distorted images,
and eliminate them from the meta-learning perspective. Ex-
tensive experiments demonstrate the generalization capa-
bility of our DIL on unseen distortion types and degrees.
Our code will be available at https://github.com/
lixinustc/Causal-IR-DIL .
|
1. Introduction
Image restoration (IR) tasks [7, 8, 32, 51], including
image super-resolution [11, 24, 43, 46, 56, 57], deblur-
ring [41, 75], denoising [3, 23, 40], compression artifacts
removal [28, 53], etc., have achieved remarkable per-
formances, powered by deep learning. A series of back-
bones are elaborately and carefully designed to boost the
†Corresponding Author
[Figure 1 plot: PSNR (dB) on Set5 with Gaussian noise; bars for ERM (36.23, 30.18, 26.30, 24.00) and DIL (36.10, 33.97, 31.95, 30.16) at noise levels σ = 20 (seen), 30, 40, 50 (unseen).]
Figure 1. A comparison between ERM and our DIL with RRDB
as backbone. The results are tested on Set5 with Gaussian noise.
restoration performances for specific degradation. Convolu-
tional neural networks (CNNs) [20] and transformers [12, 34]
are two commonly-used designed choices for the backbones
of image restoration. However, these works inevitably suf-
fer from severe performance drops when they encounter un-
seen degradations as shown in Fig. 1, where the restora-
tion degree in training corresponds to the noise of standard
deviation 20 and the degrees in testing are different. The
commonly-used training paradigm in image restoration, i.e.,
empirical risk minimization (ERM), does not work well for
out-of-distribution degradations. Particularly, the restora-
tion networks trained with ERM merely mine the correla-
tion between distorted image Id and its ideal reconstructed
image Io by minimizing the distance between Io and the
corresponding clean image Ic. However, a spurious corre-
lation [44] is also captured which introduces the bad con-
founding effects of specific degradation d. It means the
conditional probability P(Io|Id) is also conditioned on the
distortion types or degrees d (i.e., d ⊥̸⊥ Io | Id).
A robust/generalizable restoration network should be
distortion-invariant (i.e., d ⊥⊥ Io | Id). For instance, given
two distorted images with the same content Ic but differ-
ent degradations d1 and d2, the robust restoration network
is expected to recover the same reconstructed image Io
from these two distorted images (i.e., P(Io|Id, d=d1) =
P(Io|Id, d=d2)), respectively. Previous works for the ro-
bustness of the restoration network can be roughly divided
into two categories, distortion adaptation-based schemes,
and domain adaptation/translation-based schemes. Distor-
tion adaptation-based schemes [60] aim to estimate the dis-
tortion types or representations, and then, handle the differ-
ent distortions by adaptively modulating the restoration net-
work. Domain adaptation/translation-based schemes [13,
35,48] regard different distortions as different domains, and
introduce the domain adaptation/translation strategies to the
image restoration field. Notwithstanding, the above works
ignore the exploration of the intrinsic reasons for the poor
generalization capability of the restoration network. In this
paper, we take the first step to the causality-inspired im-
age restoration, where novel distortion invariant represen-
tation learning from the causality perspective is proposed,
to improve the generalization capability of the restoration
network.
As depicted in [17, 44], correlation is not equivalent to
causation . Learning distortion invariant representation for
image restoration requires obtaining the causal effects be-
tween the distorted and ideal reconstructed images instead
of only their correlation. There are two typical adjust-
ment criteria for causal effects estimation [17], the back-
door criterion, and the front-door criterion, respectively. In
particular, the back-door criterion aims to remove the bad
confounding effects by traversing over known confounders,
while the front-door criterion is to solve the challenge that
confounders cannot be identified. To improve the gener-
alization capability of the restoration network, we build
a structural causal graph in Fig. 2 for the image restora-
tion process and propose the Distortion-Invariant represen-
tation Learning (DIL) for image restoration by implement-
ing the back-door criterion from the optimization perspec-
tive. There are two challenges for achieving this. The first
challenge is how to construct the confounder sets ( i.e.,dis-
tortion sets). From the causality perspective [17, 44], it is
better to keep other factors in the distorted image invari-
ant except for distortion types. However, in the real world,
collecting distorted/clean image pairs, especially with dif-
ferent real distortions but the same content is impractical.
Inspired by counterfactual [44] in causality and the distor-
tion simulation [55, 71], we propose counterfactual distor-
tion augmentation, which selects amounts of high-quality
images from the commonly-used dataset [2, 49], and simu-
late the different distortion degrees or types on these images
as the confounders.
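For reference, the back-door adjustment invoked above can be written as follows. Mapping the simulated distortions d_i onto the confounder variable follows the description in this excerpt, while the formula itself is the standard back-door adjustment rather than an equation quoted from the paper.

```latex
P\big(I_o \mid do(I_d)\big) \;=\; \sum_{i} P\big(I_o \mid I_d,\, d = d_i\big)\, P(d = d_i)
```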
Another challenge of implementing DIL stems from
finding a stable and proper instantiating scheme for the
back-door criterion. Previous works [36,37,54,64,65] have
incorporated causal inference in high-level tasks by instan-
tiating the back-door criterion [17] with attention intervention [64], feature interventions [66], etc., which are arduous
to be exploited in the low-level task of image restoration. In
this work, we theoretically derive our distortion-invariant
representation learning DIL by instantiating the back-door
criterion from the optimization perspective. Particularly,
we model the intervention of simulated distortions for the
restoration process by virtually updating the restoration net-
work with the samples from the corresponding distortions.
Then, we eliminate the confounding effects of distortions by
introducing the optimization strategy from Meta-Learning
to our proposed DIL. In this way, we can instantiate the
causal learning in image restoration and enable the DIL
based on the back-door criterion.
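A rough sketch of how such a virtual-update-then-eliminate scheme can look in code is given below. It is a first-order, MAML/Reptile-flavoured approximation written for illustration; the function name, the inner/outer learning rates, and the choice of evaluating the virtually updated model on a different distortion are assumptions, not the authors' algorithm.

```python
import copy
import torch

def dil_meta_step(model, loss_fn, distortion_batches, inner_lr=1e-4, outer_lr=1e-4):
    """Minimal sketch of a meta-learning-style DIL update (not the authors' code).

    Assumptions: each element of `distortion_batches` holds (distorted, clean)
    pairs for one simulated distortion; a virtual (inner) update on one
    distortion is followed by evaluating the virtually updated model on another
    distortion; the paper's exact update rule may differ.
    """
    meta_grads = [torch.zeros_like(p) for p in model.parameters()]
    for i, (x_i, y_i) in enumerate(distortion_batches):
        virtual = copy.deepcopy(model)                       # virtual model update for distortion i
        inner_opt = torch.optim.SGD(virtual.parameters(), lr=inner_lr)
        inner_opt.zero_grad()
        loss_fn(virtual(x_i), y_i).backward()
        inner_opt.step()

        x_j, y_j = distortion_batches[(i + 1) % len(distortion_batches)]
        loss_j = loss_fn(virtual(x_j), y_j)                  # evaluate under a different distortion
        grads = torch.autograd.grad(loss_j, virtual.parameters())
        for g_acc, g in zip(meta_grads, grads):
            g_acc += g.detach()

    with torch.no_grad():                                    # first-order meta update of the real model
        for p, g in zip(model.parameters(), meta_grads):
            p -= outer_lr * g / len(distortion_batches)
```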
The contributions of this paper are listed as follows:
• We revisit the image restoration task from a causality
view and pinpoint that the reason for the poor gener-
alization of the restoration network, is that the restora-
tion network is not independent to the distortions in the
training dataset.
• Based on the back-door criterion in causality, we pro-
pose a novel training paradigm, Distortion Invariant
representation Learning (DIL) for image restoration,
where the intervention is instantiated by a virtual
model update under the counterfactual distortion
augmentation and is eliminated with the optimization
based on meta-learning.
• Extensive experiments on different image restoration
tasks have demonstrated the effectiveness of our DIL
for improving the generalization ability on unseen dis-
tortion types and degrees.
|
Kim_HIER_Metric_Learning_Beyond_Class_Labels_via_Hierarchical_Regularization_CVPR_2023
|
Abstract
Supervision for metric learning has long been given in
the form of equivalence between human-labeled classes. Al-
though this type of supervision has been a basis of metric
learning for decades, we argue that it hinders further ad-
vances in the field. In this regard, we propose a new regular-
ization method, dubbed HIER, to discover the latent seman-
tic hierarchy of training data, and to deploy the hierarchy
to provide richer and more fine-grained supervision than
inter-class separability induced by common metric learn-
ing losses. HIER achieves this goal with no annotation for
the semantic hierarchy but by learning hierarchical proxies
in hyperbolic spaces. The hierarchical proxies are learn-
able parameters, and each of them is trained to serve as
an ancestor of a group of data or other proxies to approxi-
mate the semantic hierarchy among them. HIER deals with
the proxies along with data in hyperbolic space since the
geometric properties of the space are well-suited to repre-
sent their hierarchical structure. The efficacy of HIER is
evaluated on four standard benchmarks, where it consis-
tently improved the performance of conventional methods
when integrated with them, and consequently achieved the
best records, surpassing even the existing hyperbolic metric
learning technique, in almost all settings.
|
1. Introduction
Learning a discriminative and generalizable embedding
space has been a vital step within many machine learning
tasks including content-based image retrieval [18, 26, 36,
37], face verification [20, 33], person re-identification [6,
50], few-shot learning [29, 35, 39], and representation
learning [18, 45, 53]. Deep metric learning has aroused lots
of attention as an effective tool for this purpose. Its goal
is to learn an embedding space where semantically simi-
lar samples are close together and dissimilar ones are far
apart. Hence, the semantic affinity between samples serves
as the main supervision for deep metric learning, and has
long been given in the form of equivalence between their
human-labeled classes.
Figure 1. Motivation of HIER. HIER aims to discover an in-
herent but latent semantic hierarchy of data (colored dots on the
boundary) by learning hierarchical proxies (larger black dots) in
hyperbolic space. The semantic hierarchy is deployed to provide
rich and fine-grained supervision that cannot be derived only by
human-labeled classes.
Although this type of supervision has been a basis of
metric learning for decades, we argue that it now hinders
further advances of the field. The equivalence of human-
labeled classes deals with only a tiny subset of possible re-
lations between samples due to the following two reasons.
First, the class equivalence is examined at only a single
fixed level of semantic hierarchy, although different classes
can be semantically similar if they share the same super-
class. Second, the equivalence is a binary relation that ig-
nores the degree of semantic affinity between two classes.
It is difficult to overcome these two limitations since the se-
mantic hierarchy of data is latent and only human-labeled
classes are available from existing datasets in the standard
metric learning setting. However, once they are resolved,
one can open up the possibility of providing rich supervi-
sion beyond human-labeled classes.
In this regard, we propose a new regularization method,
called HIErarchical Regularization (HIER), to discover and
deploy the latent semantic hierarchy of training data for
metric learning. Since the semantic hierarchy can capture
not only the predefined set of classes but also their sub-
classes, super-classes, and affinities between them, our reg-
ularizer is expected to provide richer and more fine-grained
supervision beyond inter-class separability induced by com-
mon metric learning losses. However, it is challenging
to establish such a semantic hierarchy of data given their
human-labeled classes only due to the absence of annota-
tion for the semantic hierarchy.
HIER tackles this challenge by learning hierarchical
proxies inhyperbolic space . The hierarchical proxies are
learnable parameters, and each of them is trained to serve
as an ancestor of a group of data or other proxies to approx-
imate the semantic hierarchy. Also, HIER deals with the
proxies in hyperbolic space since the space is well-suited
to represent hierarchical structures of the proxies and data.
It has been reported in the literature that Euclidean space with
zero curvature is not optimal for representing data exhibit-
ing such a semantic hierarchy [16]. In contrast, hyperbolic
space with constant negative curvature can represent the se-
mantic hierarchy of data effectively using relatively small
dimensions since its volume increases exponentially as its
Poincar ´e radius increases [31, 32].
To be specific, HIER is designed as a soft approxima-
tion of hierarchical clustering [7, 25, 43], where similar data
should be near by one another in a tree-structured hierarchy
while dissimilar ones should be placed in separate branches
of the hierarchy (See Figure 1). During training, HIER con-
structs a triplet of samples or proxies, in which two of them
are similar and the other is dissimilar based on their hyper-
bolic distances. Then, the proxy closest to the entire triplet
is considered as the lowest common ancestor (LCA) of the
triplet in a semantic hierarchy; likewise, we identify another
proxy as the LCA of only the similar pair in the same man-
ner. Given the triplet and two LCAs (proxies), HIER en-
courages that each of the LCAs and its associated members
of the triplet are close together and that the dissimilar one of
the triplet is far apart from the LCA of the similar pair. This
allows the hierarchical proxies to approximate a semantic
hierarchy of data in the embedding space, without any off-
the-shelf module for pseudo hierarchical labeling [51].
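A compact illustration of the ingredients described above (Poincaré-ball distance, LCA-like proxies, and a pull/push objective) is sketched below. The Poincaré distance is the standard formula for the unit ball; the loss itself is a hedged approximation of HIER for illustration, not the published objective.

```python
import torch

def poincare_distance(u, v, eps=1e-5):
    """Distance in the Poincare ball (curvature -1); inputs must have norm < 1."""
    sq = torch.sum((u - v) ** 2, dim=-1)
    denom = (1 - torch.sum(u ** 2, dim=-1)) * (1 - torch.sum(v ** 2, dim=-1))
    return torch.acosh(1 + 2 * sq / denom.clamp_min(eps))

def hier_style_triplet(z_i, z_j, z_k, proxies, margin=0.1):
    """Minimal sketch of a HIER-style regularizer (not the authors' loss).

    Assumptions: z_i and z_j are the similar pair and z_k the dissimilar one;
    the proxy closest to all three acts as the triplet's LCA, the proxy closest
    to the similar pair as their LCA; the loss pulls members toward their LCA
    and pushes z_k away from the pair's LCA.
    """
    d_pair = poincare_distance(z_i, proxies) + poincare_distance(z_j, proxies)
    d_all = d_pair + poincare_distance(z_k, proxies)
    lca_triplet = proxies[d_all.argmin()]                 # LCA of the whole triplet
    lca_pair = proxies[d_pair.argmin()]                   # LCA of the similar pair

    pull = (poincare_distance(z_i, lca_pair) + poincare_distance(z_j, lca_pair)
            + poincare_distance(z_k, lca_triplet))
    push = torch.relu(margin - poincare_distance(z_k, lca_pair))
    return pull + push
```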
The major contribution of our work from the perspective
of conventional metric learning is three-fold:
• We study a new and effective way of providing seman-
tic supervision beyond human-labeled classes, which
has not been actively studied in metric learning [13,
24, 30, 51, 52].
• HIER consistently improved performance over the
state-of-the-art metric learning losses when integrated
with them, and consequently achieved the best records
in almost all settings on four public benchmarks.
• By taking advantage of hyperbolic space, HIER sub-
stantially improved performance particularly on low-
dimensional embedding spaces.
Also, when regarding our work as a hyperbolic metric learn-
ing method, a remarkable contribution of HIER is that it al-
lows taking full advantage of both hyperbolic embedding space and the great legacy of conventional metric learn-
ing since it can be seamlessly incorporated with any metric
learning losses based on spherical embedding spaces. The
only prior studies on hyperbolic metric learning [11, 51] are
not compatible with conventional losses for metric learn-
ing since they learn a distance metric directly on hyperbolic
space, in which, unlike the spherical embedding spaces,
norms of embedding vectors vary significantly.
|
Lee_Decompose_Adjust_Compose_Effective_Normalization_by_Playing_With_Frequency_for_CVPR_2023
|
Abstract
Domain generalization (DG) is a principal task to eval-
uate the robustness of computer vision models. Many previ-
ous studies have used normalization for DG. In normaliza-
tion, statistics and normalized features are regarded as style
and content, respectively. However, it has a content vari-
ation problem when removing style because the boundary
between content and style is unclear. This study addresses
this problem from the frequency domain perspective, where
amplitude and phase are considered as style and content,
respectively. First, we verify the quantitative phase varia-
tion of normalization through the mathematical derivation
of the Fourier transform formula. Then, based on this, we
propose a novel normalization method, PCNorm, which
eliminates style only while preserving content through spec-
tral decomposition. Furthermore, we propose advanced
PCNorm variants, CCNorm and SCNorm, which ad-
just the degrees of variations in content and style, respec-
tively. Thus, they can learn domain-agnostic representa-
tions for DG. With the normalization methods, we propose
ResNet-variant models, DAC-P and DAC-SC, which are ro-
bust to the domain gap. The proposed models outperform
other recent DG methods. The DAC-SC achieves an aver-
age state-of-the-art performance of 65.6% on five datasets:
PACS, VLCS, Office-Home, DomainNet, and TerraIncog-
nita.
|
1. Introduction
Deep learning has performed remarkably well in various
computer vision tasks. However, the performance decreases
when distribution-shifted test data are given [39]. As train-
ing and testing datasets are assumed to be identically and
independently distributed, common vision models are not
*Equal contribution
†Corresponding author
[Figure 1 diagram: (a) existing normalization method (decompose via FT, normalize, compose via IFT; content changed) vs. (b) our normalization method (content preserved; adjusting terms).]
Figure 1. Concepts of (a) the existing normalization and (b) the
proposed methods. Our methods prevent or adjust the content
change caused by existing normalization using spectral decom-
position. The solid line marks the feedforward process and the
dashed line conceptually represents the content and style of the
feature. Red-colored star and doughnut in (b) indicate the content
and style adjusting terms, respectively.
as robust as the human vision system, which is not confused
by affordable changes in the image style [14]. Domain gen-
eralization (DG) aims to learn models that are robust to the
gap between source domain and unseen target domain to
address this problem [49]. Moreover, DG is challenging be-
cause models should learn domain-irrelevant representation
in an unsupervised manner.
The style-based approach is widely studied for DG,
which defines the domain gap as the difference in style
[19, 31, 51, 56, 57]. Typically, normalization methods, such
as batch normalization (BN) [18], layer normalization (LN)
[18], and instance normalization (IN) [45], which are well-
known in style transfer, are used in this approach. Normal-
ization statistics contain style information, and normaliza-
tion can successfully extract the style from a feature. How-
ever, the content is also changed when the style is elimi-
nated [15, 19, 34].
Moreover, another method for style-based DG [7,48,50,
51] is the frequency domain-based method. Input images
are decomposed into amplitude and phase using the Fourier
transform (FT) [4]. The amplitude and phase are each re-
garded as the style and content of the input image, respec-
tively [7, 33, 37, 51, 53]. Each component is manipulated
independently to generate the style-transformed image. In
this context, this method has an advantage of the separation
between style and content [56]. Nevertheless, most previous
studies have applied it to just the input-level data augmen-
tation [7, 51, 53] for DG.
Thus, the normalization is expected to be complemented
by the frequency domain-based method if the method is also
applicable at the feature level. To identify the feasibility of
this, we conduct a style transfer experiment. We replace IN
in AdaIN [15], a milestone work that uses normalization in
style transfer, with spectral decomposition. The qualitative
results in Fig. 2 indicate that the frequency domain-based
method can work as a feature-level style-content separator
instead of normalization.
Motivated by this, we aim to overcome the content
change problem in normalization by combining the normal-
ization with spectral decomposition. The overall concept of
our proposed method is visualized in Fig. 1. For this, we
investigate the effect of the existing normalization in DG
from the standpoint of the frequency domain. We verify
how normalization transforms the content of a feature by
mathematically deriving the FT formula. This is the first work to
present such an analysis.
Then, based upon the analysis, we introduce a
novel normalization method, phase-consistent normaliza-
tion ( PCNorm ), which preserves the content of a pre-
normalized feature. The PCNorm synthesizes a content-
invariant normalized feature by composing the phase of pre-
normalized feature and the amplitude of post-normalized
feature. The experimental results reveal the effectiveness
of PCNorm in DG compared to existing normalization.
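A minimal sketch of the PCNorm idea as described above, combining the phase of the pre-normalized feature with the amplitude of the post-normalized feature via 2D FFTs; the use of instance normalization and the (N, C, H, W) tensor layout are assumptions made for illustration, not the exact layer used in DAC-P/DAC-SC.

```python
import torch
import torch.nn.functional as F

def pcnorm(x, eps=1e-5):
    """Phase-consistent normalization sketch.

    x: feature map of shape (N, C, H, W).
    Combines the phase of the pre-normalized feature (content) with the
    amplitude of the post-normalized feature (style removed), then returns
    to the spatial domain via the inverse FFT.
    """
    x_norm = F.instance_norm(x, eps=eps)   # style removed, content shifted
    pre = torch.fft.fft2(x)                # spectrum of the input feature
    post = torch.fft.fft2(x_norm)          # spectrum of the normalized feature
    amplitude = post.abs()                 # style taken from the normalized feature
    phase = torch.angle(pre)               # content taken from the original feature
    recombined = torch.polar(amplitude, phase)
    return torch.fft.ifft2(recombined).real
```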
Along with the success of PCNorm , we take a step
further and propose two advanced PCNorm variants:
content-controlling normalization ( CCNorm ) and style-
controlling normalization ( SCNorm ). The main idea of
both methods is not to preserve the content or style but to
adjust the change in it. CCNorm and SCNorm regulate
the changes in content and style, respectively, so they can
synthesize more robust representations of the domain gap.
With the proposed normalization methods, we propose
ResNet [13] variant models, DAC-P and DAC-SC. DAC-
P is the initial model with PCNorm , and DAC-SC is the
primary model using CCNorm and SCNorm. In DAC-P, the existing BN in the downsample layer is replaced with PCNorm. In contrast, DAC-SC applies CCNorm instead of PCNorm, and SCNorm is inserted at the end of each
stage.
Figure 2. Examples of style transfer with spectral decomposition. Only the amplitude of the target images is transferred instead of their normalization statistics in AdaIN.
We evaluate DAC-P and DAC-SC on five DG benchmarks: PACS, VLCS, Office-Home, DomainNet and Ter-
raIncognita, and DAC-P outperforms other recent DG meth-
ods with an average performance of 65.1%. Furthermore, the primary model, DAC-SC, achieves state-of-the-art (SOTA) performance of 65.6% on average, and displays the highest performance at 87.5%, 70.3% and 44.9% on the PACS,
Office-Home, and DomainNet benchmarks.
The contributions of this paper are as follows:
• For the first time, we analyze the quantitative shift in
the phase caused by normalization using mathematical
derivation.
• We introduce a new normalization, PCNorm , which
can remove style only through spectral decomposition.
• We propose the advanced PCNorm variants,
CCNorm and SCNorm, which can learn domain-
agnostic features for DG by adjusting the degrees of
the changes in content and style, respectively.
• We propose ResNet-variant models, DAC-P and DAC-
SC, which apply our proposed normalization meth-
ods. We experimentally show that our methods are
effective for DG and achieve SOTA average perfor-
mances on five benchmark datasets.
|
Mehl_Spring_A_High-Resolution_High-Detail_Dataset_and_Benchmark_for_Scene_Flow_CVPR_2023
|
Abstract
While recent methods for motion and stereo estimation
recover an unprecedented amount of details, such highly
detailed structures are neither adequately reflected in the
data of existing benchmarks nor their evaluation methodol-
ogy. Hence, we introduce Spring – a large, high-resolution,
high-detail, computer-generated benchmark for scene flow,
optical flow, and stereo. Based on rendered scenes from
the open-source Blender movie “Spring”, it provides photo-
realistic HD datasets with state-of-the-art visual effects and
ground truth training data. Furthermore, we provide a web-
site to upload, analyze and compare results. Using a novel
evaluation methodology based on a super-resolved UHD
ground truth, our Spring benchmark can assess the quality
of fine structures and provides further detailed performance
statistics on different image regions. Regarding the num-
ber of ground truth frames, Spring is 60× larger than the only scene flow benchmark, KITTI 2015, and 15× larger
than the well-established MPI Sintel optical flow bench-
mark. Initial results for recent methods on our benchmark
show that estimating fine details is indeed challenging, as
their accuracy leaves significant room for improvement.
The Spring benchmark and the corresponding datasets are
available at http://spring-benchmark.org.
|
1. Introduction
The estimation of dense correspondences in terms of
scene flow, optical flow and disparity is the basis for nu-
merous tasks in computer vision. Amongst others, such
tasks include action recognition, driver assistance, robot
navigation, visual odometry, medical image registration,
video processing, stereo reconstruction and structure-from-
motion. Given this multitude of applications and their fun-
damental importance, datasets and benchmarks that allow
quantitative evaluations have ever since driven the improve-
ment of dense matching methods. The introduction of suit-
able datasets and benchmarks did not only enable the com-
Figure 1. Illustration of the high amount of details in the Spring dataset (panels: rendered image at 1920×1080; disparity and optical flow at 3840×2160, each shown with a zoomed crop). The dataset consists of HD images with super-resolved UHD ground truth for disparities and optical flow.
parison and analysis of novel methods, but also triggered
the transition from classical discrete [12, 21, 47] and con-
tinuous [3, 4, 13, 15, 31] optimization frameworks to cur-
rent learning based approaches relying on neural networks
[7, 8, 25, 42, 45, 46, 53]. The available benchmarks focus on
distinct aspects like automotive scenarios [10, 22, 30, 34],
differing complexity of motion [1, 2, 5] or (un)controlled il-
lumination [35, 38]. However, none of these benchmarks
provides a combination of high-quality data and a large
number of frames, to assess a method’s quality in regions
with fine details and to simultaneously satisfy the training
needs of current neural networks. Furthermore, with KITTI
2015 [30], only a single benchmark that goes back to the
Table 1. Overview over recent datasets and benchmarks (BM). Where applicable, we report available image pairs and ground truth frames
for motion estimation, i.e. scene flow (SF) or optical flow (OF), and for disparity estimation, i.e. stereo (ST), separately.
Venue SF OF ST BM #image pairs (motion / stereo) #gt frames (motion / stereo) #pix scenes source ph. realism motion
Spring (ours) CVPR ’23 ✓ ✓ ✓ ✓ 5953 6000 23812 12000 2.1M 47 CGI high realistic
KITTI 2015 [30] CVPR ’15 ✓ ✓ ✓ ✓ 400 400 400 400 0.5M n/a real high automotive
FlyingThings3D [28] CVPR ’16 ✓ ✓ ✓ ✗ 24084‡26760‡96336 53520 0.5M 2676 CGI low random
VKITTI 2 [6] arXiv ’20 ✓ ✓ ✓ ✗ 21210 21260 84840 42520 0.5M 5 CGI med. automotive
Monkaa [28] CVPR ’16 ✓ ✓ ✓ ✗ 8640‡8664‡34560 17328 0.5M 8 CGI low random
Driving [28] CVPR ’16 ✓ ✓ ✓ ✗ 4392‡4400‡17568 8800 0.5M 1 CGI med. automotive
KITTI 2012 [10] CVPR ’12 ✗ ✓ ✓ ✓ 389 389 389 389 0.5M n/a real high automotive
MPI Sintel [5] ECCV ’12 ✗ ✓ (✓)∗✓ 1593‡1064‡1593 1064 0.4M 35 CGI high realistic
HD1K [22] CVPRW ’16 ✗ ✓ ✗ (✓)†1074 n/a 1074 n/a 2.8M 63 real high automotive
VIPER [34] ICCV ’17 ✗ ✓ ✗ ✓ 186285 n/a 372570 n/a 2.1M 184 CGI high automotive
Middlebury-OF [2] IJCV ’11 ✗ ✓ ✗ ✓ 16 n/a 16 n/a 0.2M 16 HT/CGI med. small
Human OF [33] IJCV ’20 ✗ ✓ ✗ ✗ 238900 n/a 238900 n/a 0.4M 18432 CGI med. rand./human
AutoFlow [41] CVPR ’21 ✗ ✓ ✗ ✗ 40000 n/a 40000 n/a 0.3M n/a CGI low random
FlyingChairs [8] ICCV ’15 ✗ ✓ ✗ ✗ 22872 n/a 22872 n/a 0.2M n/a CGI low random
VKITTI [9] CVPR ’16 ✗ ✓ ✗ ✗ 21210 n/a 21210 n/a 0.5M 5 CGI low automotive
Middlebury-ST [35] GCPR ’14 ✗ ✗ ✓ ✓ n/a 33 n/a 66 5.6M 33 real high n/a
ETH3D [38] CVPR ’17 ✗ ✗ ✓ ✓ n/a 47 n/a 47 0.4M 11 real high n/a
HT: hidden texture, ‡: available in clean and final, ∗: not part of the benchmark, †: offline
pre-deep-learning era is available for image-based scene
flow, which currently prevents the development of well-
generalizing methods due to lacking dataset variability.
Contributions. To tackle these challenges, we propose the
Spring dataset and benchmark , providing a large number of
high-quality and high-resolution frames and ground truths
to enable the development of even more accurate methods
for scene flow, optical flow and stereo estimation. With
Spring, we complement existing benchmarks through a fo-
cus on high-detail data, while we simultaneously broaden
the number of available datasets for the development of
well-generalizing methods across data with varying prop-
erties. The latter aspect is particularly valuable for image-
based scene flow methods. There, we provide the first
benchmark with high-resolution, dense ground truth data in
the literature. In summary, our contributions are fourfold:
(i)New dataset: Based on the open-source Blender
movie “Spring”, we rendered 6000 stereo image pairs
from 47 sequences with state-of-the-art visual effects
in HD resolution (1920×1080 px). For those image
pairs, we extracted ground truth from Blender in for-
ward and backward direction, both in space and time,
amounting to 12000 ground truth frames for stereo
and 23812 ground truth frames for motion – 60× more than KITTI and 15× more than MPI Sintel.
(ii)High-detail evaluation methodology: To adequately
assess small details at a pixel level, we propose a novel
evaluation methodology that relies on an even higher
resolved ground truth. All ground truth frames are
computed in UHD resolution (3840×2160 px); see the sketch after this list. (iii) Benchmark: We set up a public benchmark website
to upload, analyze and compare novel methods. It
provides several widely used error measures and ad-
ditionally analyzes the results in different types of
regions, including high-detail, unmatched, non-rigid,
sky and large-displacement areas.
(iv) Baselines: We evaluated 15 state-of-the-art methods
(8 optical flow, 4 stereo, 3 scene flow) as non-fine-
tuned baselines. Results not only show that small de-
tails still pose a problem to recent methods, but also
hint at significant potential improvements in all tasks.
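Referring to contribution (ii), the snippet below is a minimal sketch of how an HD optical-flow estimate could be scored against the super-resolved UHD ground truth; the min-over-2×2-block rule and the halving of UHD flow vectors are assumptions made for illustration, not necessarily the benchmark's official protocol.

```python
import numpy as np

def epe_against_uhd_gt(flow_hd, flow_gt_uhd):
    """Endpoint error of an HD flow estimate against UHD ground truth.

    flow_hd:     (H, W, 2) prediction at HD resolution.
    flow_gt_uhd: (2H, 2W, 2) ground truth at UHD resolution.
    Each HD prediction is compared with the four UHD ground-truth values
    covering its pixel footprint; the smallest endpoint error is kept, so
    thin structures resolved only in the UHD ground truth are not unfairly
    penalized. (Assumed protocol for illustration.)
    """
    h, w, _ = flow_hd.shape
    # Group the UHD ground truth into the 2x2 blocks behind each HD pixel.
    gt_blocks = flow_gt_uhd.reshape(h, 2, w, 2, 2).transpose(0, 2, 1, 3, 4)
    gt_blocks = gt_blocks.reshape(h, w, 4, 2)
    # Assumption: UHD flow vectors are expressed in UHD pixel units, so halve
    # them before comparing against the HD estimate.
    diff = flow_hd[:, :, None, :] - 0.5 * gt_blocks
    epe = np.sqrt((diff ** 2).sum(-1)).min(axis=2)
    return float(epe.mean())
```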
|
Pan_Fine-Grained_Image-Text_Matching_by_Cross-Modal_Hard_Aligning_Network_CVPR_2023
|
Abstract
Current state-of-the-art image-text matching methods
implicitly align the visual-semantic fragments, like regions
in images and words in sentences, and adopt cross-attention
mechanism to discover fine-grained cross-modal semantic
correspondence. However, the cross-attention mechanism
may bring redundant or irrelevant region-word alignments,
degenerating retrieval accuracy and limiting efficiency. Al-
though many researchers have made progress in mining
meaningful alignments and thus improving accuracy, the
problem of poor efficiency remains unresolved. In this work,
we propose to learn fine-grained image-text matching from
the perspective of information coding. Specifically, we sug-
gest a coding framework to explain the fragments aligning
process, which provides a novel view to reexamine the cross-
attention mechanism and analyze the problem of redundant
alignments. Based on this framework, a Cross-modal Hard
Aligning Network (CHAN) is designed, which comprehen-
sively exploits the most relevant region-word pairs and elim-
inates all other alignments. Extensive experiments con-
ducted on two public datasets, MS-COCO and Flickr30K,
verify that the relevance of the most associated word-region
pairs is discriminative enough as an indicator of the image-
text similarity, with superior accuracy and efficiency over
the state-of-the-art approaches on the bidirectional image
and text retrieval tasks. Our code will be available at
https://github.com/ppanzx/CHAN.
|
1. Introduction
With the rapid development of information technology,
multi-modal data, like texts, audio, images, and video, has
become ubiquitous in our daily life. It is of great value
to study multi-modal learning to give computers the ability
to process and relate information from multiple modalities.
Among the tasks of multi-modal learning, image-text re-
trieval is the most fundamental one, which paves the way for
more general cross-modal retrieval, namely, implementing
a retrieval task across different modalities, such as video-
Figure 1. Illustration of different semantic corresponding methods:
(a) Global Embedding methods, (b) Fragment Embedding meth-
ods, (c) existing Fragment Aligning methods, and (d) our CHAN
method. Here ω in (c) and (d) is the attention weight/assignment
between the word "pajamas" and the image region, where the re-
gion with the maximum attention weight is outlined in yellow
below. Compared to existing Fragment Aligning methods which
bring redundant alignments, we improve them by attending to the
most relevant region while neglecting all of the misalignments.
text and audio-text. Image-text retrieval has attracted broad
attention in recent years [ 12,21,23]; yet, the key challenges,
i.e., bridging the inter-modality gap and achieving the se-
mantic correspondence across modalities, are far from be-
ing resolved. A good alignment directly links to correctly
measuring the similarity between images and texts.
Early works usually adopt the intuitive idea of global
embedding to find the semantic correspondence between a
whole picture and the complete sentence [ 12]. By project-
ing the overall image and text into a common embedding
space, the similarity between heterogeneous samples is mea-
sured for the subsequent matching of the two modalities, as
shown in Figure 1(a). However, such a global embedding
method often induces background noise, which impedes the
correct image-text matching. Recent works have focused on
essential fragments [ 4,16,22], such as salient objects in im-
ages and keywords in texts, aiming to reduce contributions
of uninterested regions as well as irrelevant conjunctions.
By introducing the self-attention mechanism, the represen-
tation of holistic inputs is replaced by a weighted sum of
the local fragments, thereby easing the matching obstacles
caused by the noise parts, as shown in Figure 1(b). However,
these fragment embedding methods do not explicitly imple-
ment fine-grained aligning, as they only focus on the com-
plex aggregation of fragments in a single modality without
taking account of correctly learning granular cross-modal
semantic consistency.
Based on the consensus that overall image-text similar-
ity is a complex aggregation of the local similarities cal-
culated between the cross-modal fragments [ 19], the frag-
ments aligning method emphasizes the aggregation of the
local similarities rather than the aggregation of the local rep-
resentations. SCAN [ 21] and its variants [ 10,25,43,46] are
the representatives of this school of thought, which align im-
age regions and sentence words by locally associating visual
semantics, and integrate the semantic similarities between
relevant region-word pairs to measure the overall image-
text relevance. Specifically, with the core idea of the cross-
attention mechanism, they attend to the fragments related
to each query fragment from another modality, thus mak-
ing the semantically consistent granular pairs significantly
contribute to the final image-text similarity, and at the same
time eliminating or weakening the influence of inconsistent
pairs.
However, there are two problems associated with the
previous fragments aligning methods: (1) redundant align-
ments are detrimental to retrieval accuracy. Selecting se-
mantically consistent region-word alignments and rejecting
inconsistent alignments is the key to realizing fine-grained
image-text matching. However, though semantically con-
sistent alignments can be discovered by the cross-attention
mechanism, it is far from enough to achieve an accurate re-
trieval because these meaningful alignments will be more or
less disturbed by other attended fragments irrelevant to the
shared semantics. As illustrated in Figure 1(c), given a text
fragment "pajamas," current cross-attention-based methods
not only attend to the most matched image region but also
refer to other regions not exactly relevant, like "cat" and
"towel," which will incorrectly estimate the affinity between
"pajamas" and irrelevant regions while training. As a result,
semantically inconsistent region-word pairs will eventually
overwhelm those matched ones, thus compromising the ef-
fect from the most matched pairs and degenerating the final
performance; (2) caching cross-attention weights is with a
massive cost of memory and time. When the cross-attention mechanism is applied to fragments aligning, it is inevitable
to calculate the affinities between all the cross-modal frag-
ments, because a query needs to be reconstructed with the
attention weights derived from the affinities, which incurs
huge memory consumption to store the attention weights.
In fact, due to the limited memory, the matching process
between each query text/image and the whole image/text
set requires a large number of iterations, resulting in a long
retrieval time and thus compromising the practical applica-
tions of the fragments aligning method.
Inspired by the coding idea widely adopted in content-
based image retrieval tasks [ 14,17,32], we propose a cod-
ing framework to explain the aligning process and rethink
cross-attention-based methods from the view of soft assign-
ment coding. Specifically, we regard each word in a sen-
tence as a query and represent the salient regions in an im-
age as a codebook. Therefore, the aligning of fragments
is expressed as an adjustment of the measure of the rela-
tionship between query words and visual codewords. The
overall image-text similarity is the aggregation of similari-
ties between all queries and all codewords. In this view, the
definition of attention weights in a cross-attention mecha-
nism is almost the same as assignments in soft assignment
coding [ 14] scheme, and thus the cross-attention mecha-
nism can be explained as a kind of soft assignment cod-
ing method. Based on the assumption that there must ex-
ist a sub-region in an image which can best describe every
given word in the semantically consistent sentence [ 19], we
deem it unnecessary to consider all or even a selected part of
codewords since most of them do not bring benefit for bet-
ter describing the query words but lowering the efficiency.
This insight inspires switching the methodology from soft
assignment coding to hard assignment coding [ 27], with
attention to the most relevant word-region/query-codeword
pair which is a more accurate indication of semantic consis-
tency between a word and an image, as shown in Figure 1(d).
We further propose a novel Cross-modal Hard Aligning Net-
work (CHAN) for fine-grained image-text matching. Our
scheme not only discards redundant alignments and better
discovers the granular semantic correspondence, but also re-
lieves the costly dense cross-attention matching, thus signif-
icantly improving cross-attention baselines both in accuracy
and efficiency. Our main contributions can be summarized
as follows:
•We propose a coding framework to explain fragments
aligning for image-text retrieval and subsequently elab-
orate on the aligning process of cross-attention mecha-
nism. This elaboration allows us to pinpoint the deficien-
cies, and propose an improved hard assignment coding
scheme.
•With the hard assignment coding scheme, we propose
a novel Cross-modal Hard Aligning Network (CHAN),
which can accurately discover the shared semantics of im-
age and text by mining the informative region-word pairs
and rejecting the redundant or irrelevant alignments.
•Extensive experiments on two benchmarks, i.e.,
Flickr30K [ 45] and MS-COCO [ 5], showing the su-
periority of CHAN in both accuracy and efficiency
compared with state-of-the-art methods.
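As a concrete illustration of the hard assignment coding described above, the sketch below scores an image-sentence pair by keeping, for every word, only its single most similar region; the cosine-similarity choice and the tensor shapes are assumptions for illustration, not the exact CHAN architecture.

```python
import torch
import torch.nn.functional as F

def hard_aligned_similarity(words, regions):
    """Image-text similarity via hard assignment coding.

    words:   (n_words, d)   word embeddings of one sentence (queries).
    regions: (n_regions, d) region embeddings of one image (codebook).
    For each word, only the most relevant region contributes, instead of a
    soft-attention mixture over all regions, so no attention weights over
    every region-word pair need to be stored.
    """
    words = F.normalize(words, dim=-1)
    regions = F.normalize(regions, dim=-1)
    sim = words @ regions.t()            # (n_words, n_regions) cosine similarities
    best_per_word, _ = sim.max(dim=1)    # hard assignment: one region per word
    return best_per_word.mean()          # aggregate word-level scores
```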
|
Qian_Adaptive_Data-Free_Quantization_CVPR_2023
|
Abstract
Data-free quantization (DFQ) recovers the performance
of quantized network (Q) without the original data, but gen-
erates the fake sample via a generator (G) by learning from
full-precision network (P), which, however, is totally inde-
pendent of Q, overlooking the adaptability of the knowledge
from generated samples, i.e., informative or not to the learn-
ing process of Q, resulting into the overflow of generaliza-
tion error. Building on this, several critical questions —
how to measure the sample adaptability to Q under varied
bit-width scenarios? whether the largest adaptability is the
best? how to generate the samples with adaptive adapt-
ability to improve Q’s generalization? To answer the above
questions, in this paper, we propose an Adaptive Data-Free
Quantization (AdaDFQ) method, which revisits DFQ from
a zero-sum game perspective upon the sample adaptability
between two players — a generator and a quantized net-
work. Following this viewpoint, we further define the dis-
agreement and agreement samples to form two boundaries,
where the margin between two boundaries is optimized to
adaptively regulate the adaptability of generated samples
to Q, so as to address the over-and-under fitting issues. Our
AdaDFQ reveals: 1) the largest adaptability is NOT the
best for sample generation to benefit Q’s generalization; 2)
the knowledge of the generated sample should not be in-
formative to Q only, but also related to the category and
distribution information of the training data for P . The the-
oretical and empirical analysis validate the advantages of
AdaDFQ over the state-of-the-arts. Our code is available
at https://github.com/hfutqian/AdaDFQ.
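Purely as a hedged illustration of the "sample adaptability" notion used above, the sketch below scores a generated sample by the disagreement between the full-precision network P and the quantized network Q; the symmetric-KL choice and all names are assumptions, not the AdaDFQ margin objective itself.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def disagreement_score(p_model, q_model, fake_images):
    """Per-sample disagreement between P (full precision) and Q (quantized).

    A high score means Q still predicts very differently from P on the
    generated sample, i.e. the sample carries knowledge Q has not absorbed.
    (Illustrative proxy for 'adaptability'; not the exact AdaDFQ loss.)
    """
    p_prob = F.softmax(p_model(fake_images), dim=1)
    q_prob = F.softmax(q_model(fake_images), dim=1)
    log_p = p_prob.clamp_min(1e-8).log()
    log_q = q_prob.clamp_min(1e-8).log()
    kl_pq = (p_prob * (log_p - log_q)).sum(1)
    kl_qp = (q_prob * (log_q - log_p)).sum(1)
    return 0.5 * (kl_pq + kl_qp)   # (batch,) symmetric KL disagreement
```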
|
1. Introduction
Deep Neural Networks (DNNs) have encountered great
challenges when applied to the resource-constrained de-
vices, owing to the increasing demands for computing and
storage resources. Network quantization [6, 10], a promis-
ing approach to improve the efficiency of DNNs, reduces
the model size by mapping the floating-point weights and activations to low-bit ones. Quantization methods generally recover the performance loss from the quantization errors, such as fine-tuning or calibration operations with the original training data.
Figure 1. Existing work, e.g., GDFQ [22] (the blue), generally suffers from (a) underfitting issue (both training and testing loss are large) under 3-bit precision and (b) overfitting issue (training loss is small while testing loss is large) under 5-bit precision. Our AdaDFQ (the green) generates the sample with adaptive adaptability to Q, yielding better generalization of Q with varied bit widths. The observations are from MobileNetV2 on ImageNet.
However, the original data may not be accessible due to
privacy and security issues. To this end, data-free quan-
tization (DFQ) has come up to quantize models without
the original data by synthesizing meaningful fake samples,
where the quantized network (Q) is improved by distill-
ing the knowledge from the pre-trained full-precision model
(P) [5, 18]. Among the existing arts [1, 23], the increasing
attention has recently transformed to the generative mod-
els [3, 22, 25], which generally captures the distribution of
the original data from P by utilizing the generative model
as a generator (G), where P serves as the discriminator to
guide the generation process [22]. To narrow the gap be-
tween the synthetic and real data, [3] proposes to restore
|
Nag_Post-Processing_Temporal_Action_Detection_CVPR_2023
|
Abstract
Existing Temporal Action Detection (TAD) methods typ-
ically take a pre-processing step in converting an input
varying-length video into a fixed-length snippet represen-
tation sequence, before temporal boundary estimation and
action classification. This pre-processing step would tem-
porally downsample the video, reducing the inference res-
olution and hampering the detection performance in the
original temporal resolution. In essence, this is due to a
temporal quantization error introduced during resolution
downsampling and recovery. This could negatively impact
the TAD performance, but is largely ignored by existing
methods. To address this problem, in this work we intro-
duce a novel model-agnostic post-processing method with-
out model redesign and retraining. Specifically, we model
the start and end points of action instances with a Gaus-
sian distribution for enabling temporal boundary inference
at a sub-snippet level. We further introduce an efficient
Taylor-expansion based approximation, dubbed as Gaus-
sian Approximated Post-processing (GAP ). Extensive ex-
periments demonstrate that our GAP can consistently im-
prove a wide variety of pre-trained off-the-shelf TAD mod-
els on the challenging ActivityNet (+0.2% ∼0.7% in aver-
age mAP) and THUMOS (+0.2% ∼0.5% in average mAP)
benchmarks. Such performance gains are already signif-
icant and highly comparable to those achieved by novel
model designs. Also, GAP can be integrated with model
training for further performance gain. Importantly, GAP
enables lower temporal resolutions for more efficient in-
ference, facilitating low-resource application. The code is
available at https://github.com/sauradip/GAP
|
1. Introduction
The objective of Temporal action detection (TAD) is to
identify both the temporal interval ( i.e., start and end points)
and the class label of all action instances in an untrimmed
video [3, 7]. Given a test video, existing TAD methods
Figure 1. A typical pipeline for temporal action detection. (a) For
efficiency and model design ease, temporal resolution reduction
is often applied during pre-processing. This causes model infer-
ence at lower (coarse) temporal resolutions. (b) After bringing the
prediction results back to the original temporal resolution during
inference, quantization error will be introduced inevitably.
typically generate a set of action instance candidates via
proposal generation based on regressing predefined anchor
boxes [4, 6, 13, 23] or directly predicting the start and end
times of proposals [2,9,10,15,25–27] and global segmenta-
tion masking [14]. To facilitate deep model design and im-
prove computational efficiency, most TAD methods would
pre-process a varying-length video into a fixed-length snip-
pet sequence by first extracting frame-level visual features
Figure 2. Conventional snippet-level TAD inference along with
our proposed sub-snippet-level post-processing.
with a frozen video encoder and subsequently sampling a
smaller number of feature points ( i.e., snippet) evenly (see
Fig. 1(a)). As a result, a TAD model performs the infer-
ence at lower temporal resolutions . This introduces a tem-
poral quantization error that could hamper the model per-
formance. For instance, when decreasing video temporal
resolution from 400 to 25, the performance of BMN [9] de-
grades significantly from 34.0% to 28.1% in mAP on Ac-
tivityNet. Despite the obvious connection between the er-
ror and performance degradation, this problem is largely ig-
nored by existing methods.
In this work, we investigate the temporal quantization
error problem from a post-processing perspective. Specif-
ically, we introduce a model-agnostic post-processing ap-
proach for improving the detection performance of exist-
ing off-the-shelf TAD models without model retraining. To
maximize the applicability, we consider the TAD inference
as a black-box process. Concretely, taking the predictions
by any model, we formulate the start and end points of ac-
tion instances with a Gaussian distribution in a continuous
snippet temporal resolution. We account for the distribution
information of temporal boundaries via Taylor-expansion
based approximation. This enables TAD inference at sub-
snippet precision (Fig. 2), creating the possibility of allevi-
ating the temporal quantization error. We name our method
as Gaussian Approximated Post-processing (GAP).
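A minimal sketch of the kind of Gaussian-plus-Taylor boundary refinement described above, applied to a 1D boundary-confidence sequence over snippets; the smoothing width and the use of log-responses are illustrative assumptions rather than the exact GAP formulation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def refine_boundary(confidence, sigma=1.0, eps=1e-8):
    """Sub-snippet boundary localization from a snippet-level confidence curve.

    confidence: (T,) boundary confidence over T snippets (numpy array).
    Models the response around the peak as a Gaussian and applies a
    second-order Taylor expansion of its log to shift the integer argmax by
    -f'(m)/f''(m), yielding a continuous (sub-snippet) boundary estimate.
    """
    smoothed = gaussian_filter1d(confidence.astype(np.float64), sigma) + eps
    log_r = np.log(smoothed)
    m = int(np.argmax(smoothed))
    if m == 0 or m == len(smoothed) - 1:
        return float(m)                                   # no refinement at borders
    d1 = 0.5 * (log_r[m + 1] - log_r[m - 1])              # first derivative
    d2 = log_r[m + 1] - 2.0 * log_r[m] + log_r[m - 1]     # second derivative
    if d2 >= 0:                                           # not a proper local maximum
        return float(m)
    return float(m - d1 / d2)
```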
We summarize the contributions as follows. (I) We iden-
tify the previously neglected harming effect of temporal res-
olution reduction during the pre-processing step in tempo-
ral action detection. (II) For the first time, we investigate
the resulting temporal quantization error problem from a
model generic post-processing perspective. This is realized
by modeling the action boundaries with a Gaussian distri-
bution along with an efficient Taylor-expansion based ap-
proximation. (III) Extensive experiments show that a wide
range of TAD models [2,9,10,15,25–27] can seamlessly benefit from our proposed GAP method without algorith-
mic modification and model retraining, achieving the best
single model accuracy on THUMOS and ActivityNet. De-
spite this simplicity, the performance improvement obtained
from GAP can match those achieved by designing novel
models [5]. At the cost of model retraining, our GAP can
be integrated with existing TAD models for achieving fur-
ther gain. Further, our GAP favorably enables lower tem-
poral resolutions for higher inference efficiency with little
performance degradation. Crucially, GAP can be applied
generally in a variety of learning settings ( e.g., supervised,
semi-supervised, zero-shot, few-shot).
|
Plesh_GlassesGAN_Eyewear_Personalization_Using_Synthetic_Appearance_Discovery_and_Targeted_Subspace_CVPR_2023
|
Abstract
We present GlassesGAN, a novel image editing frame-
work for custom design of glasses, that sets a new stan-
dard in terms of output-image quality, edit realism, and
continuous multi-style edit capability. To facilitate the
editing process with GlassesGAN, we propose a Targeted
Subspace Modelling (TSM) procedure that, based on a
novel mechanism for (synthetic) appearance discovery in
the latent space of a pre-trained GAN generator, constructs
an eyeglasses-specific (latent) subspace that the editing
framework can utilize. Additionally, we also introduce an
appearance-constrained subspace initialization (SI) tech-
nique that centers the latent representation of the given in-
put image in the well-defined part of the constructed sub-
space to improve the reliability of the learned edits. We
test GlassesGAN on two (diverse) high-resolution datasets
(CelebA-HQ and SiblingsDB-HQf) and compare it to three
state-of-the-art baselines, i.e., InterfaceGAN, GANSpace,
and MaskGAN. The reported results show that GlassesGAN
convincingly outperforms all competing techniques, while offering functionality (e.g., fine-grained multi-style editing)
not available with any of the competitors. The source code
for GlassesGAN is made publicly available.
|
1. Introduction
Consumers are increasingly choosing the convenience
of online shopping over traditional brick-and-mortar
stores [3]. For the apparel industry, which traditionally re-
lied on individuals being able to try on items to suit their
taste and body shape before purchasing, the shift to digital
commerce has created an unsustainable cycle of purchasing,
shipping, and returns. Now, an estimated 85% of manufac-
tured fashion items end up in landfills each year, largely due
to consumer returns and unsatisfied online customers [35].
In response to these challenges, the computer vision
community has become increasingly interested in virtual-
try-on (VTON) techniques [3,6,9,11] that allow for the de-
velopment of virtual fitting rooms, where consumers can
try on clothing in a virtual setting. Furthermore, such tec-
hniques also give users the flexibility to explore custom
designs and personalize fashion items by rendering them
photo-realistically in the provided input image.
While considerable progress has been made recently in
image-based virtual try-on techniques for clothing and ap-
parel that do not require (costly) dedicated hardware and
difficult-to-acquire 3D annotated data [5,6,11,18,48], most
deployed solutions for virtual eyewear try-on still largely
rely on traditional computer graphics pipelines and 3D
modeling [1,29,30,49,51]. Such 3D solutions provide con-
vincing results, but save for a few exceptions, e.g., [14],
are only able to handle predefined glasses and do not sup-
port custom designs and eyewear personalization. Although
work has also been done with 2D (image) data only, rele-
vant research for virtual eyewear try-on has mostly focused
on editing technology (facilitated by Generative Adversarial
Network - GANs [10]) capable of inserting glasses into an
image [15, 27, 34, 37, 44]. The images these methods gen-
erate are often impressive, but adding eyewear with finely
tunable appearance control still remains challenging.
In this work, we address this gap by introducing Glasses-
GAN, a flexible image editing framework that allows users
to add glasses to a diverse range of input face images (at a
high-resolution) and control their appearance. Distinct from
existing virtual try-on work in the vision literature, the goal
of GlassesGAN is not to try on existing glasses, but rather
to allow users to explore custom eyewear designs. Glass-
esGAN is designed as a GAN inversion method [45], that
uses a novel Targeted Subspace Modeling (TSM) technique
to identify relevant directions within the latent space of a
pre-trained GAN model that can be utilized to manipulate
the appearance of eyeglasses in the edited images. A key
component of GlassesGAN is a new Synthetic Appearance
Discovery (SAD) mechanism that samples the GAN latent
space for eyeglasses appearances, without requiring real-
world facial images with eyewear. Additionally, we propose
anappearance-constrained subspace initialization proce-
dure for the (inference-time) editing stage, which helps to
produce consistent editing results across a diverse range of
input images. We evaluate GlassesGAN in comprehensive
experiments over two test datasets and in comparison to
state-of-the-art solutions from the literature, with highly en-
couraging results.
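To make the subspace idea more tangible, here is a hedged sketch of building a targeted latent subspace with PCA over discovered "with-glasses minus without-glasses" latent offsets; the latent layout, the offset-discovery step, and all function names are assumptions, not the published TSM/SAD procedure.

```python
import numpy as np

def build_eyeglasses_subspace(latents_plain, latents_glasses, n_dirs=5):
    """Targeted subspace modelling sketch (PCA over latent offsets).

    latents_plain:   (N, d) latent codes of faces without glasses.
    latents_glasses: (N, d) the same codes after a (synthetic) appearance
                     discovery step has pushed them toward eyeglasses.
    Returns n_dirs orthogonal directions spanning eyeglasses appearance.
    """
    offsets = latents_glasses - latents_plain              # per-sample edit vectors
    offsets = offsets - offsets.mean(axis=0, keepdims=True)
    # Principal directions of the offsets span the eyeglasses subspace.
    _, _, vt = np.linalg.svd(offsets, full_matrices=False)
    return vt[:n_dirs]                                      # (n_dirs, d) basis

def edit(latent, basis, weights):
    """Move a latent code inside the learned subspace."""
    return latent + np.asarray(weights) @ basis
```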
In summary, our main contributions in this paper are:
• We present GlassesGAN, an image editing framework
for custom design of eyeglasses in a virtual try-on set-
ting that sets a new standard in terms of output image
quality, edit realism, and continuous multi-style edit
capability, as illustrated in Figure 1.
• We introduce a Synthetic Appearance Discovery
(SAD) mechanism and a Targeted Subspace Model-
ing (TSM) procedure, capable of capturing eyeglasses-
appearance variations in the latent space of GAN mod-
els using glasses-free facial images only.
• We introduce a novel initialization procedure for the
editing process that improves the reliability of the fa-
cial manipulations across different input images.
|
Pathiraja_Multiclass_Confidence_and_Localization_Calibration_for_Object_Detection_CVPR_2023
|
Abstract
Albeit achieving high predictive accuracy across many
challenging computer vision problems, recent studies sug-
gest that deep neural networks (DNNs) tend to make over-
confident predictions, rendering them poorly calibrated.
Most of the existing attempts for improving DNN calibra-
tion are limited to classification tasks and restricted to cal-
ibrating in-domain predictions. Surprisingly, very little
to no attempts have been made in studying the calibra-
tion of object detection methods, which occupy a pivotal
space in vision-based security-sensitive, and safety-critical
applications. In this paper, we propose a new train-time
technique for calibrating modern object detection meth-
ods. It is capable of jointly calibrating multiclass confi-
dence and box localization by leveraging their predictive
uncertainties. We perform extensive experiments on several
in-domain and out-of-domain detection benchmarks. Re-
sults demonstrate that our proposed train-time calibration
method consistently outperforms several baselines in reduc-
ing calibration error for both in-domain and out-of-domain
predictions. Our code and models are available at https:
//github.com/bimsarapathiraja/MCCL
|
1. Introduction
Deep neural networks (DNNs) are the backbone of many
top-performing systems due to their high predictive perfor-
mance across several challenging domains, including com-
puter vision [16,17,41,45,52] and natural language process-
ing [5,7]. However, some recent works [14,15,38,47] report
that DNNs are susceptible to making overconfident predic-
tions, which leaves them miscalibrated. This not only spurs
a mistrust in their predictions, but more importantly, could
lead to disastrous consequences in several safety-critical ap-
plications, such as healthcare diagnosis [8, 43], self-driving
cars [13], and legal research tools [50]. For instance, in self-
driving cars, if the perception component wrongly detects a
stop sign as a speed limit sign with high confidence, it can
potentially lead to disastrous outcomes. Several strategies have been proposed in the recent past
for improving model calibration. A simple calibration tech-
nique is a post-processing step that re-scales the outputs
of a trained model using parameters which are learnt on
a hold-out portion of the training set [14]. Despite being
easy to implement, these post-processing approaches are
restrictive. They assume the availability of a hold-out set,
which is not always possible in many real-world settings.
Another route to reducing calibration error is train-time cal-
ibration techniques, which intervene at the training time by
involving all model parameters. Typically train-time cali-
bration methods feature an auxiliary loss term that is added
to the application-specific loss function to regularize predic-
tions [18, 27, 33, 35].
We note that almost all prior efforts towards improving
model calibration target the task of visual image classifica-
tion. Surprisingly, little to no noticeable attempts have been
made in studying the calibration of visual object detection
models. Visual object detection methods account for a ma-
jor and critical part of many vision-based decision-making
systems. Moreover, most of the current calibration tech-
niques only aim at reducing calibration error for in-domain
predictions. However, in many realistic settings, it is likely
that, after model deployment, the incoming data distribution
could continuously change from the training data distribu-
tion. In essence, the model should be well-calibrated for
both in-domain and out-of-domain predictions.
To this end, in this paper, we aim to study the cali-
bration of (modern) deep learning-based object detection
methods. In this pursuit, we observe that, (a) object detec-
tion methods are intrinsically miscalibrated, (b) besides dis-
playing noticeable calibration errors for in-domain predic-
tions, they are also poorly calibrated for out-of-domain pre-
dictions and, (c) finally, the current calibration techniques
for classification are sub-optimal for object detection (Fig-
ure 1). Towards improving the calibration performance of
object detection methods, inspired by the train-time cali-
bration route, we propose a new train-time calibration ap-
proach that aims at jointly calibrating the predictive multiclass
confidence and bounding box localization.
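One simple form that a differentiable train-time calibration term can take is sketched below, penalizing the gap between mean confidence and mean accuracy on a mini-batch of detections; this is a simplified stand-in for illustration only and not the exact MCCL auxiliary loss, which additionally leverages predictive uncertainty and box localization.

```python
import torch
import torch.nn.functional as F

def batch_calibration_penalty(logits, targets):
    """Differentiable mini-batch confidence-calibration penalty.

    logits:  (N, K) class logits for N detections.
    targets: (N,)   assigned ground-truth class indices.
    Penalizes the gap between the mean predicted confidence and the mean
    accuracy on the mini-batch, pushing confidences toward the empirical
    correctness rate. (Illustrative stand-in, not the MCCL loss.)
    """
    probs = F.softmax(logits, dim=1)
    conf, pred = probs.max(dim=1)
    acc = (pred == targets).float()
    return (conf.mean() - acc.mean()).abs()
```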
Figure 1. DNN-based object detectors are inherently miscalibrated for both in-domain and out-of-domain predictions. Also, calibration
methods for image classification are sub-optimal for object detection. Our proposed train-time calibration method for object detection is
capable of reducing the calibration error (D-ECE%) of DNN-based detectors in both in-domain and out-domain scenarios.
Contributions: (1) We study the relatively unexplored di-
rection of calibrating modern object detectors and observe
that they are intrinsically miscalibrated in both in-domain
and out-of-domain predictions. Also, the existing calibra-
tion techniques for classification are sub-optimal for cali-
brating object detectors. (2) We propose a new train-time
calibration method for detection, at the core of which is
an auxiliary loss term, which attempts to jointly calibrate
multiclass confidences and bounding box localization. We
leverage predictive uncertainty in multiclass confidences
and bounding box localization. (3) Our auxiliary loss term is differentiable, operates on minibatches, and can be utilized with other task-specific loss functions. (4) We perform
extensive experiments on challenging datasets, featuring
several in-domain and out-of-domain scenarios. Our train-
time calibration method consistently reduces the calibra-
tion error across DNN-based object detection paradigms,
including FCOS [45] and Deformable DETR [52], both in
in-domain and out-of-domain predictions.
|
Lu_Learning_Spatial-Temporal_Implicit_Neural_Representations_for_Event-Guided_Video_Super-Resolution_CVPR_2023
|
Abstract
Event cameras sense the intensity changes asyn-
chronously and produce event streams with high dynamic
range and low latency. This has inspired research endeav-
ors utilizing events to guide the challenging video super-
resolution (VSR) task. In this paper, we make the first at-
tempt to address a novel problem of achieving VSR at ran-
dom scales by taking advantage of the high temporal res-
olution property of events. This is hampered by the diffi-
culties of representing the spatial-temporal information of
events when guiding VSR. To this end, we propose a novel
framework that incorporates the spatial-temporal interpo-
lation of events to VSR in a unified framework. Our key
idea is to learn implicit neural representations from queried
spatial-temporal coordinates and features from both RGB
frames and events. Our method contains three parts. Specif-
ically, the Spatial-Temporal Fusion ( STF) module first
learns the 3D features from events and RGB frames. Then,
the Temporal Filter ( TF) module unlocks more explicit mo-
tion information from the events near the queried times-
tamp and generates the 2D features. Lastly, the Spatial-
Temporal Implicit Representation ( STIR ) module recovers
the SR frame in arbitrary resolutions from the outputs of
these two modules. In addition, we collect a real-world
dataset with spatially aligned events and RGB frames. Ex-
tensive experiments show that our method significantly sur-
passes the prior-arts and achieves VSR with random scales,
e.g., 6.5. Code and dataset are available at https:
//vlis2022.github.io/cvpr23/egvsr .
|
1. Introduction
Video super-resolution (VSR) is a task of recovering
high-resolution (HR) frames from successive multiple low-
resolution (LR) frames. Unlike LR videos, HR videos con-
tain more visual information, e.g., edge and texture, which
Figure 1. (a) Our method learns implicit neural representations (INR) from the queried spatial-temporal coordinates (STF) and temporal features (TF) from RGB frames and events. (b) An example of VSR with random scale factors, e.g., 6.5, by our method.
can be very helpful for many tasks, e.g., metaverse [52],
surveillance [15] and entertainment [4]. However, VSR is
a highly ill-posed problem owing to the loss of both spa-
tial and temporal information, especially in the real-world
scenes [5, 6, 26, 45]. Recently, deep learning-based algo-
rithms have been successfully applied to learn the intra-
frame correlation and temporal consistency from the LR
frames to recover HR frames, e.g., DUF [19], EDVR [48],
RBPN [14], BasicVSR [5], BasicVSR++ [6]. However, due
to the lack of inter-frame information, these methods are
hampered by the limitations of modeling the spatial and
temporal dependencies and may fail to recover HR frames
in complex scenes.
Event cameras are bio-inspired sensors that can asyn-
chronously detect the per-pixel intensity changes and gen-
erate event streams with low latency (1 us) and high dy-
namic range (HDR) compared with the conventional frame-
based cameras (140 dB vs. 60 dB) [35, 51]. This has
sparked extensive research in reconstructing image/video
from events [12, 29, 42, 44, 46, 53]. However, the recon-
structed results are less plausible due to the loss of visual
details, e.g., structures, and textures. As a result, a recent
work has utilized events for guiding VSR [18], trying to ‘in-
ject’ energy from the event-based to the frame-based cam-
eras. It leverages the high-frequency event data to synthe-
size neighboring frames, so as to find correspondences be-
tween consecutive frames. However, it only treats video
frames in discrete ways with 2D arrays of pixels and up-
samples them at a fixed up-scaling factor, e.g., ×2 or ×4.
This causes inconvenience and inflexibility in the applica-
tions of SR videos, which often require arbitrary resolu-
tions, i.e., random scales.
Recently, some works tried to learn continuous image
representations with arbitrary resolutions, e.g., LIIF [7],
taking 2D queried coordinates and 2D features as input to
learn an implicit neural representation (INR). VideoINR [8],
on the other hand, decodes the LR videos into arbitrary
spatial resolutions and frame rates by learning from the
spatial coordinates and temporal coordinates, respectively.
However, it is still unclear how to leverage events to guide
learning spatial-temporal INRs for VSR. This is hampered
by two challenges. Firstly, although event data can ben-
efit VSR with its high-frequency temporal and spatial in-
formation, the large modality gap between the events and
video frames makes it challenging to use INRs to represent
the 3D spatial-temporal coordinates with event data. More-
over, there lacks HR real-world dataset with spatially well-
aligned events and frames.
In this paper, we make the first attempt to address a novel
problem of achieving VSR at random scales by taking ad-
vantage of the high-temporal resolution property of events.
Accordingly, we propose a novel framework that subtly in-
corporates the spatial-temporal interpolation from events to
VSR in a unified framework, as shown in Fig. 1. Our key
idea is to learn INRs from the queried spatial-temporal coor-
dinates and features from both the RGB frames and events.
Our framework mainly includes three parts. The Spatial-
Temporal Fusion ( STF) branch learns the spatial-temporal
information from events and RGB frames (Sec. 3.2). The
shallow feature fusion and deep feature fusion are em-
ployed to narrow the modality gap and fuse the events and
RGB frames into 3D global spatial-temporal representa-
tions. Then, the Temporal Filter ( TF) branch further un-
locks more explicit motion information from events. It
learns the 2D event features from events nearing the queried
timestamp (Sec. 3.3). With the features from the STF
and TF branches, the Spatial-Temporal Implicit Represen-
tation ( STIR ) module decodes the features and recovers SR
frames with arbitrary spatial resolutions (Sec. 3.4). That is,
given the arbitrary queried coordinates, we apply 3D sam-
pling and 2D sampling to the fused 3D features and event
data separately. Finally, the sampling features are added
and fed into a decoder, and generate targeted SR frames. In
addition, we collect a real-world dataset with a spatial resolution of 3264×2248, in which the events and RGB frames
are spatially aligned. Extensive experiments on two real-
world datasets show that our method surpasses the existing
methods by 1.3 dB.
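As an illustration of decoding at arbitrary queried coordinates, below is a hedged sketch of a coordinate-conditioned INR decoder plus a feature-sampling helper; the layer sizes, the concatenation scheme, and the function names are assumptions rather than the exact STIR design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoordinateDecoder(nn.Module):
    """Decodes an RGB value from a queried (x, y, t) coordinate plus features
    sampled at that coordinate (illustrative INR-style decoder)."""

    def __init__(self, feat_dim, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 3, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, 3),
        )

    def forward(self, coords, sampled_feats):
        # coords: (B, Q, 3) normalized (x, y, t); sampled_feats: (B, Q, feat_dim)
        return self.mlp(torch.cat([sampled_feats, coords], dim=-1))

def sample_features(feat, xy):
    """Bilinearly sample fused features at continuous (x, y) locations.

    feat: (B, C, H, W); xy: (B, Q, 2) coordinates in [-1, 1]."""
    out = F.grid_sample(feat, xy.unsqueeze(2), align_corners=False)  # (B, C, Q, 1)
    return out.squeeze(-1).permute(0, 2, 1)                          # (B, Q, C)
```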
In summary, the main contributions of this paper are five-
fold: ( I) Our work serves as the first attempt to address a
non-trivial problem of learning INRs from events and RGB
frames for VSR at random scales. ( II) We propose the STF
branch and the TF branch to model the spatial and tempo-
ral dependencies from events and RGB frames. ( III) We
propose the STIR module to reconstruct RGB frames with
arbitrary spatial resolutions. ( IV) We collect a high-quality
real-world dataset with spatially aligned events and RGB
frames. ( V) Our method significantly surpasses the existing
methods and achieves SR with random scales, e.g., 6.5.
|
Qraitem_Bias_Mimicking_A_Simple_Sampling_Approach_for_Bias_Mitigation_CVPR_2023
|
Abstract
Prior work has shown that Visual Recognition datasets
frequently underrepresent bias groups B (e.g. Female) within class labels Y (e.g. Programmers). This dataset bias
can lead to models that learn spurious correlations between
class labels and bias groups such as age, gender, or race.
Most recent methods that address this problem require sig-
nificant architectural changes or additional loss functions
requiring more hyper-parameter tuning. Alternatively, data
sampling baselines from the class imbalance literature ( e.g.
Undersampling, Upweighting), which can often be imple-
mented in a single line of code and often have no hyper-
parameters, offer a cheaper and more efficient solution.
However, these methods suffer from significant shortcom-
ings. For example, Undersampling drops a significant part
of the input distribution per epoch while Oversampling re-
peats samples, causing overfitting. To address these short-
comings, we introduce a new class-conditioned sampling
method: Bias Mimicking. The method is based on the obser-
vation that if a class c bias distribution, i.e. P_D(B|Y = c), is mimicked across every c′ ≠ c, then Y and B are statistically independent. Using this notion, BM, through a novel
training procedure, ensures that the model is exposed to
the entire distribution per epoch without repeating samples.
Consequently, Bias Mimicking improves underrepresented
groups’ accuracy of sampling methods by 3% over four
benchmarks while maintaining and sometimes improving
performance over nonsampling methods. Code: https:
//github.com/mqraitem/Bias-Mimicking
|
1. Introduction
Spurious predictive correlations have been frequently
documented within the Deep Learning literature [33, 37].
These correlations can arise when most samples in a class c (e.g. blonde hair) belong to a bias group s (e.g. fe-
male). Thus, the model might learn to predict classes by us-
ing their membership to their bias groups ( e.g. more likely
to predict blonde hair if a sample is female). Mitigating
such spurious correlations (Bias) involves decorrelating the
Figure 1. Comparison of sampling approaches for mitigating bias of class labels Y (Hair Color) toward sensitive group labels B (Gender). (a) illustrates Undersampling/Oversampling methods that drop/repeat samples respectively from a dataset D per epoch and thus ensure that P_D(Y|B) = P_D(Y). However, dropping samples hurts the model's predictive performance, and repeating samples can cause overfitting with over-parameterized models like neural nets [34]. (b) shows our Bias Mimicking approach which subsamples D and produces three variants. Each variant, denoted as d_c ⊂ D, preserves class c samples (i.e. the mimicked class) and mimics the bias of class c in each c′ ≠ c. This mimicking process, as we show in our work, ensures that P_{d_c}(Y|B) = P_{d_c}(Y). Moreover, by using each d_c separately to train the model, we expose it to all the samples in D per epoch, and since we do not repeat samples in each d_c, our method is less prone to overfitting.
model’s predictions of input samples from their member-
ship to bias groups. Previous research efforts have primar-
ily focused on model-based solutions. These efforts can be
mainly categorized into two directions 1) ensemble-based
methods [34], which introduce separate prediction heads for
samples from different bias groups 2) methods that intro-
duce additional bias regularizing loss functions and require
additional hyper-parameter tuning [12, 15, 25, 26, 32].
Dataset resampling methods, popular within the class
imbalance literature [3, 8, 13, 28], present a simpler and
cheaper alternative. They do not require hyperparameter
tuning or extra model parameters. Therefore, they are faster
to train. Moreover, as illustrated in Figure 1(a), they can be
extended to Bias Mitigation by considering the imbalance
within the dataset subgroups rather than classes. Most com-
mon of these methods are Undersampling [3, 13, 28] and
Oversampling [34]. They mitigate class imbalance by alter-
ing the dataset distribution through dropping/repeating sam-
ples, respectively. Another similar solution is Upweight-
ing [4,29], which levels each sample contribution to the loss
function by appropriately weighting its loss value. How-
ever, these methods suffer from significant shortcomings.
For example, Undersampling drops a significant portion of
the dataset per epoch, which could harm the models’ predic-
tive capacity. Moreover, Upweighting can be unstable when
used with stochastic gradient descent [2]. Finally, models
trained with Oversampling, as shown by [34], are likely to
overfit due to being exposed to repetitive sample copies.
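As a concrete reference point, here is a minimal sketch of the subgroup-level Upweighting baseline discussed above; the inverse-frequency weighting and its normalization are illustrative choices, not a prescribed implementation.

```python
import numpy as np

def subgroup_upweighting(y, b):
    """Sketch of the Upweighting baseline extended to bias mitigation: each sample is
    weighted inversely to the frequency of its (class, bias-group) subgroup, so every
    subgroup contributes equally to the loss (normalization is an illustrative choice)."""
    pairs = list(zip(y, b))
    counts = {p: pairs.count(p) for p in set(pairs)}
    weights = np.array([len(pairs) / (len(counts) * counts[p]) for p in pairs])
    return weights  # pass as per-sample weights to the classification loss
```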
To address these problems, we propose Bias Mimick-
ing (BM): a class-conditioned sampling method that mit-
igates the shortcomings of prior work. As shown in Fig-
ure 1(b), given a dataset D with a set of three classes C,
BM subsamples D and produces three different variants.
Each variant, d_c ⊂ D, retains every sample from class c
while subsampling each c' ≠ c such that the bias distribu-
tion of c', i.e., P_{d_c}(B|Y=c'), mimics that of c. For example,
observe d_{Blonde Hair} in Figure 1(b) bottom left. Note how the
bias distribution of class "Blonde Hair" remains the same
while the bias distributions of "Black Hair" and "Red Hair"
are subsampled such that they mimic the bias distribution
of "Blonde Hair". This mimicking process decorrelates Y
from B, since Y and B are now statistically independent, as
we prove in Section 3.1.
The strength of our method lies in the fact that d_c re-
tains class c samples while at the same time ensuring
P_{d_c}(Y|B) = P_{d_c}(Y) in each d_c. Using this result, we intro-
duce a novel training procedure that uses each distribution
separately to train the model. Consequently, the model is
exposed to the entirety of D since each d_c retains class c
samples. Refer to Section 3.1 for further details. Note how
our method is fundamentally different from Undersampling.
While Undersampling also ensures statistical independence
on the dataset level, it subsamples every subgroup. There-
fore, the training distribution per epoch is a smaller portion
of the total dataset D. Moreover, our method is also differ-
ent from Oversampling, since each d_c does not repeat sam-
ples. Thus we reduce the risk of overfitting.
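To make the mimicking step concrete, here is a minimal sketch of the class-conditioned subsampling described above, assuming hard class and bias-group labels; the subsample sizes and rounding are illustrative choices rather than the authors' exact implementation.

```python
import numpy as np

def bias_mimicking_subsets(y, b, seed=0):
    """Sketch of Bias Mimicking subsampling. y: class labels, b: bias-group labels.
    Returns one index subset d_c per class c: every sample of class c is kept, and
    each other class c' is subsampled so that its bias-group distribution mimics
    P(B | Y = c)."""
    rng = np.random.default_rng(seed)
    classes, groups = np.unique(y), np.unique(b)
    subsets = {}
    for c in classes:
        target = np.array([np.mean(b[y == c] == g) for g in groups])   # P(B | Y = c)
        keep = [np.flatnonzero(y == c)]                                 # class c is kept fully
        for c2 in classes:
            if c2 == c:
                continue
            idx = np.flatnonzero(y == c2)
            counts = np.array([np.sum(b[idx] == g) for g in groups])
            # largest subset of class c' whose group proportions can match the target
            with np.errstate(divide="ignore", invalid="ignore"):
                n_total = np.min(np.where(target > 0, counts / target, np.inf))
            for g, p in zip(groups, target):
                n_g = int(np.floor(p * n_total))
                pool = idx[b[idx] == g]
                keep.append(rng.choice(pool, size=min(n_g, len(pool)), replace=False))
        subsets[c] = np.concatenate(keep)
    return subsets
```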
In addition to proposing Bias Mimicking, another con-
tribution of our work is providing an extensive analysis
of sampling methods for bias mitigation. We found many
sampling-based methods were notably missing in the com-parisons used in prior work [12,32,34]. Despite their short-
comings, we show that Undersampling and Upweighting
are surprisingly competitive on many bias mitigation bench-
marks. Therefore, this emphasizes these methods’ impor-
tance as an inexpensive first choice for mitigating bias.
However, in cases where these methods are ineffective, Bias
Mimicking bridges the performance gap and achieves com-
parable performance to nonsampling methods. Finally, we
thoroughly analyze our approach’s behavior through two
experiments. First, we verify the importance of each d_c to
the model’s predictive performance in Section 4.2. Second,
we investigate our method’s sensitivity to the mimicking
condition in Section 4.3. Both experiments showcase the
importance of our design in mitigating bias.
Our contributions can be summarized as:
• We show that simple sampling methods can be compet-
itive on some benchmarks when compared to nonsampling
state-of-the-art approaches.
• We introduce a novel resampling method: Bias Mimick-
ing that bridges the performance gap between sampling
and nonsampling methods; it improves the average under-
represented subgroups' accuracy by more than 3% compared to
other sampling methods.
• We conduct an extensive empirical analysis of Bias Mim-
icking that details the method’s sensitivity to the Mimick-
ing condition and uncovers insights about its behavior.
|
Li_One-Shot_High-Fidelity_Talking-Head_Synthesis_With_Deformable_Neural_Radiance_Field_CVPR_2023
|
Abstract
Talking head generation aims to generate faces that
maintain the identity information of the source image and
imitate the motion of the driving image. Most pioneering
methods rely primarily on 2D representations and thus will
inevitably suffer from face distortion when large head ro-
tations are encountered. Recent works instead employ ex-
plicit 3D structural representations or implicit neural ren-
dering to improve performance under large pose changes.
Nevertheless, the fidelity of identity and expression is not so
desirable, especially for novel-view synthesis. In this paper,
we propose HiDe-NeRF , which achieves high-fidelity and
free-view talking-head synthesis. Drawing on the recently
proposed Deformable Neural Radiance Fields, HiDe-NeRF
represents the 3D dynamic scene as a canonical appear-
ance field and an implicit deformation field, where the for-
mer comprises the canonical source face and the latter
models the driving pose and expression. In particular, we
improve fidelity from two aspects: (i) to enhance identity ex-
pressiveness, we design a generalized appearance module that leverages multi-scale volume features to preserve face
shape and details; (ii) to improve expression preciseness,
we propose a lightweight deformation module that explic-
itly decouples the pose and expression to enable precise ex-
pression modeling. Extensive experiments demonstrate that
our proposed approach can generate better results than pre-
vious works. Project page: https://www.waytron.net/hidenerf/
|
1. Introduction
Talking-head synthesis aims to preserve the identity in-
formation of the source image and imitate the motion of the
driving image. Synthesizing talking faces of a given person
driven by another speaker is of great importance to various ap-
plications, such as film production, virtual reality, and digi-
tal human. Existing talking head methods are not capable of
generating high-fidelity results, they cannot precisely pre-
serve the source identity or mimic the driving expression.
Most pioneering approaches [11, 17, 24, 35, 37, 44]
learn source-to-driving motion to warp the source face
to the desired pose and expression. According to the
warping types, previous works can be roughly divided
into: 2D warping-based methods, mesh-based methods,
and neural rendering-based methods. 2D warping-based
methods [17, 35, 37] warp the source features based on a
motion field estimated from sparse keypoints. However,
these methods suffer from the collapse of facial structure
and expression under large head rotations. Moreover, they
cannot fully disentangle the motion from the identity informa-
tion of the driving image, resulting in a misguided face
shape. Mesh-based methods [13] are proposed to tackle the
problem of facial collapse by using 3D Morphable Models
(3DMM) [4, 30] to explicitly model the geometry. Limited
by the non-rigid deformation modeling ability of 3DMM,
such implementation leads to rough and unnatural facial
expressions. Besides, it ignores the influence of vertex
offset on face shape, resulting in low identity fidelity. With
the superior capability in multi-view image synthesis of
Neural Radiance Fields (NeRF) [25], a concurrent work
named FNeVR [47] takes the merits of 2D warping and
3D neural rendering. It learns a 2D motion field to warp
the source face and utilizes volume rendering to refine the
warped features to obtain final results. Therefore, it inherits
the same problem as other warping-based methods.
To address the above issues and improve the fidelity of
talking-head synthesis, we propose a High-fidelity and
Deformable NeRF, dubbed HiDe-NeRF. Drawing on the
idea of the recently emerged Deformable NeRF [1, 27, 28],
HiDe-NeRF represents the 3D dynamic scene as a canon-
ical appearance field and an implicit deformation field. The
former is a radiance field of the source face in canonical
pose, and the latter learns the backward deformation for
each observed point to shift them into the canonical field
and retrieve their volume features for neural rendering. On
this basis, we devise a Multi-scale Generalized Appear-
ance module (MGA) to ensure identity expressiveness and a
Lightweight Expression-aware Deformation module (LED)
to improve expression preciseness.
To elaborate, MGA encodes the source image into multi-
scale canonical volume features, which integrate high-level
face shape information and low-level facial details, for bet-
ter identity preservation. We employ the tri-plane [7, 31] as
volume feature representation in this work for two reasons:
(i) it enables generalization across 3D representations; (ii)
it is fast and scales efficiently with resolution, facilitating
us to build hierarchical feature structures. Moreover, we
modify the ill-posed tri-plane representation by integrating
a camera-to-world feature transformation, so that we can
extract the planes from the source image with full control of identity. This distinguishes our model from those identity-
uncontrollable approaches [3, 7] that generate the planes
from noise with StyleGAN2-based generators [20]. No-
tably, the MGA enables our proposed HiDe-NeRF to be im-
plemented in a subject-agnostic manner, breaking the limi-
tation that existing Deformable NeRFs can only be trained
for a specific subject.
The deformations in talking-head scenes could be de-
composed into the global pose and local expression de-
formation. The former is rigid and easy to handle, while
the latter is non-rigid and difficult to model. Existing De-
formable NeRFs predict them as a whole, hence failing
to capture precise expression. Instead, our proposed LED
could explicitly decouple the expression and pose in defor-
mation prediction, thus significantly improving the expres-
sion fidelity. Specifically, it uses a pose-agnostic expression
encoder and a position encoder to obtain the latent expres-
sion embeddings and latent position embeddings, where the
former models the expression independently and the latter
encodes positions of points sampled from rays under arbi-
trary observation views. Then, a deformation decoder takes
the combination of two latent embeddings as input and out-
puts point-wise deformation. In this way, our work achieves
precise expression manipulation and maintains expression
consistency for free-view rendering (as shown in Fig. 1).
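As a rough illustration of this decoupled design, the following is a hypothetical PyTorch sketch of an expression-aware deformation module; the layer sizes, the positional-encoding dimension, and the way the expression code is broadcast over sampled points are assumptions for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class DecoupledDeformationSketch(nn.Module):
    """Expression and pose are decoupled: only a pose-agnostic expression code is
    combined with per-point position embeddings to predict a backward deformation
    that shifts observed points into the canonical field."""

    def __init__(self, expr_dim=64, pos_dim=63, hidden=128):
        super().__init__()
        self.expr_encoder = nn.Sequential(nn.Linear(expr_dim, hidden), nn.ReLU(),
                                          nn.Linear(hidden, hidden))
        self.pos_encoder = nn.Sequential(nn.Linear(pos_dim, hidden), nn.ReLU(),
                                         nn.Linear(hidden, hidden))
        self.decoder = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(),
                                     nn.Linear(hidden, 3))           # point-wise 3D offset

    def forward(self, expr_code, point_enc):
        # expr_code: (B, expr_dim); point_enc: (B, N, pos_dim) encodings of ray samples
        e = self.expr_encoder(expr_code).unsqueeze(1).expand(-1, point_enc.shape[1], -1)
        p = self.pos_encoder(point_enc)
        return self.decoder(torch.cat([e, p], dim=-1))                # offsets added to the points
```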
To summarize, the contributions of our approach are:
• Firstly, we introduce the HiDe-NeRF for high-fidelity
and free-view talking head synthesis. To the best of
our knowledge, HiDe-NeRF is the first one-shot and
subject-agnostic Deformable Neural Radiance Fields.
• Secondly, we propose the Multi-scale Generalized
Appearance module (MGA) and the Lightweight
Expression-aware Deformation module (LED) to sig-
nificantly improve the fidelity of identity and expres-
sion in talking-head synthesis.
• Lastly, extensive experiments demonstrate that our
proposed approach can generate more realistic results
than state-of-the-art in terms of capturing the driving
motion and preserving the source identity information.
|
Mall_Change-Aware_Sampling_and_Contrastive_Learning_for_Satellite_Images_CVPR_2023
|
Abstract
Automatic remote sensing tools can help inform many
large-scale challenges such as disaster management, cli-
mate change, etc. While a vast amount of spatio-temporal
satellite image data is readily available, most of it remains
unlabelled. Without labels, this data is not very useful for
supervised learning algorithms. Self-supervised learning
instead provides a way to learn effective representations for
various downstream tasks without labels. In this work, we
leverage characteristics unique to satellite images to learn
better self-supervised features. Specifically, we use the tem-
poral signal to contrast images with long-term and short-
term differences, and we leverage the fact that satellite im-
ages do not change frequently. Using these characteristics,
we formulate a new contrastive loss called Change-
Aware Contrastive (CACo) Loss. Further, we also present a
novel method of sampling different geographical regions.
We show that leveraging these properties leads to better
performance on diverse downstream tasks. For example,
we see a 6.5% relative improvement for semantic segmenta-
tion and an 8.5% relative improvement for change detection
over the best-performing baseline with our method.
|
1. Introduction
Our planet is surrounded by a large number of satellites
constantly collecting images of the world. This massive
trove of visual information can help monitor phenomena at
the world-scale, and inform solutions to global problems
such as climate change or loss of biodiversity. Automatic
vision tools can help by, for example, monitoring land-use
change over time [36] or the evolution of urban areas [6].
However, training all these models requires labeled data .
Unfortunately, labeling the massive trove of satellite images
is expensive, more so than internet images because of the
expertise necessary. This issue is exacerbated by the differ-
ent label requirements of many monitoring applications.
One way to alleviate this problem of limited labeled
data is to use self-supervised learning techniques to learn
a good feature representation from unlabeled satellite imagery.

Figure 1. Images of the same location at three different times.
Changes from 2016 to 2020 are due to major urban development,
while those from July to September are seasonal variations. Our
approach, CACo, learns features that are sensitive to the former
but invariant to the latter.

This representation can then be further finetuned
with much fewer labels for specific applications. Mod-
ern self-supervised learning approaches are based on con-
trastive learning. These techniques train a feature space so
that each image in the dataset is embedded close to aug-
mented versions of itself (e.g., with jittered colors) but far
from other images of the dataset. A possible approach is to
directly apply these techniques on satellite image datasets.
However, the spatio-temporal structure of satellite imagery
is much richer than the unstructured collections of internet
images typically used in standard self-supervised learning.
In this work, we ask, how can we best leverage the structure
of satellite images for better self-supervised learning?
The first important aspect of the spatio-temporal struc-
ture of satellite images is the availability of multiple
temporally-spaced images for the same location. Past work
has used this structure to sample images spread over a few
weeks or months from each location to encourage invari-
ance to seasonal variations [25]. However, we can access
images not just over a few months, but over many years .
Over such long time spans, we often see significant, per-
manent change, such as the construction of houses, the dry-
ing of lakes, or the logging of forests (Fig. 1). While we
want feature representations to be invariant to temporary,
seasonal change, we want the representation to be sensitive
to permanent, long-term change. To capture this intuition,
we sample multiple images from the same location covering
both short and long time spans and encourage invariance to
the former but sensitivity to the latter.
A second and more important aspect of satellite images
is that permanent change is spatially rare. Change is con-
centrated near urban areas, but many parts of the planet see
little change. When there is no change, we want our feature
representation to be the same even over long time spans.
We capture this insight with a novel strategy to robustly
estimate whether or not there is a long-term change, even
in the middle of training, by comparing long-term feature
differences to short-term variations. We then design a loss
function that is conditional on this change estimate: it en-
courages invariance to long-term differences depending on
whether a change occurs or not. We call this novel loss
function the Change-Aware Contrastive loss (CACo).
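A hedged sketch of such a change-conditional contrastive loss is given below; the thresholded change estimate (long-term versus short-term feature differences) and the InfoNCE form are illustrative, and the paper's exact formulation may differ.

```python
import torch
import torch.nn.functional as F

def info_nce(anchor, positive, negatives, tau):
    # anchor, positive: (B, D); negatives: (B, M, D); all assumed L2-normalized
    pos = (anchor * positive).sum(-1, keepdim=True) / tau
    neg = torch.einsum("bd,bmd->bm", anchor, negatives) / tau
    target = torch.zeros(anchor.shape[0], dtype=torch.long, device=anchor.device)
    return F.cross_entropy(torch.cat([pos, neg], dim=1), target, reduction="none")

def caco_loss_sketch(f_old, f_a, f_b, f_others, tau=0.07, k=2.0):
    """f_old: features of an image taken years earlier; f_a, f_b: two recent images a
    few weeks/months apart; f_others: features of other locations (B, M, D)."""
    f_old, f_a, f_b = (F.normalize(x, dim=-1) for x in (f_old, f_a, f_b))
    negs = F.normalize(f_others, dim=-1)

    # Robust change estimate: long-term difference vs. short-term (seasonal) variation.
    changed = ((f_old - f_b).norm(dim=-1) > k * ((f_a - f_b).norm(dim=-1) + 1e-6)).float()

    loss_seasonal = info_nce(f_b, f_a, negs, tau)                # invariance to seasons, always
    loss_keep_old = info_nce(f_b, f_old, negs, tau)              # pull the old image if unchanged
    loss_push_old = info_nce(f_b, f_a, torch.cat([negs, f_old.unsqueeze(1)], dim=1), tau)

    return (loss_seasonal + (1 - changed) * loss_keep_old + changed * loss_push_old).mean()
```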
The above loss function uses the temporal structure of
satellite images. We can also utilize geographical struc-
ture by carefully sampling the most informative locations
on the planet. We provide an improvement over a previ-
ously proposed geographical sampling [25]. We show that
sampling closer to cities, and ignoring samples completely
in the ocean can result in a dataset much more useful for
learning a general representation for various downstream
tasks. We evaluate our new representation on a diverse set
of downstream tasks such as landcover classification, se-
mantic segmentation, and change detection. Our method
achieves significant relative improvements (ranging from
6.5% to 8.5%) over the state-of-the-art for a variety of tasks
such as segmentation and change detection.
To summarize, we make the following contributions:
• We propose a new self-supervised learning loss that
uses long-term temporal information in satellite im-
agery to encourage invariance to seasonal variations
but sensitivity to permanent, long-term changes.
• We introduce a novel approach to robustly estimate
whether a location has undergone significant change
by comparing long-term changes to seasonal varia-
tions. Our new change-aware loss function (CACo)
uses this to decide when to encourage invariance.
• We use an improved geographical sampling that pro-
vides more diverse data for representation learning.
|
Li_Referring_Image_Matting_CVPR_2023
|
Abstract
Different from conventional image matting, which either
requires user-defined scribbles/trimap to extract a specific
foreground object or directly extracts all the foreground ob-
jects in the image indiscriminately, we introduce a new task
named Referring Image Matting (RIM) in this paper, which
aims to extract the meticulous alpha matte of the specific
object that best matches the given natural language descrip-
tion, thus enabling a more natural and simpler instruction
for image matting. First, we establish a large-scale challeng-
ing dataset RefMatte by designing a comprehensive image
composition and expression generation engine to automat-
ically produce high-quality images along with diverse text
attributes based on public datasets. RefMatte consists of 230
object categories, 47,500 images, 118,749 expression-region
entities, and 474,996 expressions. Additionally, we construct
a real-world test set with 100 high-resolution natural im-
ages and manually annotate complex phrases to evaluate
the out-of-domain generalization abilities of RIM methods.
Furthermore, we present a novel baseline method CLIPMat
for RIM, including a context-embedded prompt, a text-driven
semantic pop-up, and a multi-level details extractor. Exten-
sive experiments on RefMatte in both keyword and expres-
sion settings validate the superiority of CLIPMat over repre-
sentative methods. We hope this work could provide novel
insights into image matting and encourage more follow-
up studies. The dataset, code and models are available at
https://github.com/JizhiziLi/RIM.
|
1. Introduction
Image matting refers to extracting the soft alpha matte
of the foreground in natural images, which is beneficial
for various downstream applications such as video confer-
ences, advertisement production, and e-Commerce promo-
tion [58]. Typical matting methods can be divided into two
groups: 1) the methods based on auxiliary inputs, e.g., scrib-
ble [17] and trimap [1,17], and 2) automatic matting methods
that can extract the foreground without any human interven-
tion [19,44]. However, the former are not applicable for fully
automatic scenarios, while the latter are limited to specific
categories, e.g., human [2, 32, 57], animal [19], or the salient
objects [40, 60]. It is still unexplored to carry out control-
lable image matting on arbitrary objects based on language
instructions, e.g., extracting the alpha matte of the specific
object that best matches the given language description.
Recently, language-driven tasks such as referring expres-
sion segmentation (RES) [55], referring image segmentation
(RIS) [12, 25, 54], visual question answering (VQA) [8],
and referring expression comprehension (REC) [31] have
been widely studied. Great progress in these areas has been
made based on many datasets like ReferIt [14], Google Ref-
Exp [34], RefCOCO [56], VGPhraseCut [50], and Cops-
Ref [3]. However, due to the limited resolution of avail-
able datasets, visual grounding methods are restricted to
the coarse segmentation level. Besides, most of the meth-
ods [13, 30] neglect pixel-level text-visual alignment and
cannot preserve sufficient details, making them difficult to
be used in scenarios that require meticulous alpha mattes.
To fill this gap, we propose a new task named Referring
Image Matting (RIM) , which refers to extracting the metic-
ulous high-quality alpha matte of the specific foreground
object that can best match the given natural language de-
scription from the image. Different from the conventional
matting methods, RIM is designed for controllable image
matting that can perform a more natural and simpler instruc-
tion to extract arbitrary objects. It is of practical significance
in industrial application domains and opens up a new re-
search direction. To facilitate the study of RIM, we establish
the first dataset RefMatte , which consists of 230 object
categories, 47,500 images, and 118,749 expression-region
entities together with the corresponding high-quality alpha
mattes and 474,996 expressions. Specifically, to build up
RefMatte, we revisit a lot of prevalent public matting datasets
like AM-2k [19], P3M-10k [18], AIM-500 [20], SIM [45]
and manually label the category of each foreground object
(a.k.a. entity) carefully. We also adopt multiple off-the-shelf
deep learning models [27, 51] to generate various attributes
for each entity, e.g., gender, age, and clothes type of human.
Figure 1. Some examples from our RefMatte test set (top) and the
results of CLIPMat given keyword and expression inputs (bottom).
Figure 2. Some examples from our RefMatte-RW100 test set (top)
and the results of CLIPMat given expression inputs (bottom), which
also show CLIPMat’s robustness to preserved privacy information.
Then, we design a comprehensive composition and expres-
sion generation engine to produce the synthetic images with
reasonable absolute and relative positions considering other
entities. Finally, we present several expression logic forms to
generate varying language descriptions with the use of rich
visual attributes. In addition, we propose a real-world test
set RefMatte-RW100 with 100 images containing diverse
objects and human-annotated expressions, which is used to
evaluate the generalization ability of RIM methods. Some
examples are shown in Figure 1 and Figure 2.
Since previous visual grounding methods are designed for
the segmentation-level tasks, directly applying them [13, 30,
43] to the RIM task cannot produce promising alpha mattes
with fine details. Here, we present CLIPMat, a novel baseline
method specifically designed for RIM. CLIPMat utilizes the
large-scale pre-trained CLIP [41] model as the text and visual
backbones, and the typical matting branches [18, 19] as the
decoders. An intuitive context-embedded prompt is adopted
to provide matting-related learnable features for the text en-
coder. To extract high-level visual semantic information for
the semantic branch, we pop up the visual semantic feature
through the guidance of the text output feature. Additionally,
as RIM requires much more visual details compared to the
segmentation task, we devise a module to extract multi-level
details by exploiting shallow-layer features and the original
input image, aiming to preserve the foreground details in the
matting branch. Figure 1 and Figure 2 show some promising
results of the proposed CLIPMat given different types of
language inputs, i.e., keywords and expressions.
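As a rough illustration of the text-driven semantic pop-up idea, the sketch below re-weights CLIP-style visual patch tokens by their similarity to the text feature so that regions matching the description are emphasized; the projection, tensor shapes, and residual connection are assumptions, not CLIPMat's actual design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextDrivenPopUpSketch(nn.Module):
    """Hypothetical text-driven semantic pop-up: visual patch tokens are re-weighted
    by their affinity to the text embedding before entering a matting decoder."""

    def __init__(self, vis_dim=768, txt_dim=512):
        super().__init__()
        self.txt_proj = nn.Linear(txt_dim, vis_dim)

    def forward(self, patch_tokens, text_feat):
        # patch_tokens: (B, N, vis_dim); text_feat: (B, txt_dim)
        t = F.normalize(self.txt_proj(text_feat), dim=-1)
        v = F.normalize(patch_tokens, dim=-1)
        affinity = torch.einsum("bnd,bd->bn", v, t).softmax(dim=-1)   # text-visual alignment
        popped = patch_tokens * affinity.unsqueeze(-1)                # emphasize matching regions
        return popped + patch_tokens                                  # residual keeps global context
```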
Furthermore, to provide a fair and comprehensive evalua-tion of CLIPMat and relevant state-of-the-art methods, we
conduct extensive experiments on RefMatte under two differ-
ent settings, i.e., the keyword-based setting and expression-
based setting, depending on language descriptions’ forms.
Both the subjective and objective results have validated the
superiority of CLIPMat over representative methods. The
main contribution of this study is three-fold. 1) We de-
fine a new task named RIM, aiming to identify and extract
the alpha matte of the specific foreground object that best
matches the given natural language description. 2) We es-
tablish the first large-scale dataset RefMatte, consisting of
47,500 images and 118,749 expression-region entities with
high-quality alpha mattes and diverse expressions. 3) We
present a novel baseline method CLIPMat specifically de-
signed for RIM, which achieves promising results in two
different settings of RefMatte, also on real-world images.
|
Li_Neural_Video_Compression_With_Diverse_Contexts_CVPR_2023
|
Abstract
For any video codecs, the coding efficiency highly re-
lies on whether the current signal to be encoded can find
the relevant contexts from the previous reconstructed sig-
nals. Traditional codec has verified more contexts bring
substantial coding gain, but in a time-consuming manner.
However, for the emerging neural video codec (NVC), its
contexts are still limited, leading to low compression ra-
tio. To boost NVC, this paper proposes increasing the
context diversity in both temporal and spatial dimensions.
First, we guide the model to learn hierarchical quality pat-
terns across frames, which enriches long-term and yet high-
quality temporal contexts. Furthermore, to tap the poten-
tial of optical flow-based coding framework, we introduce a
group-based offset diversity where the cross-group interac-
tion is proposed for better context mining. In addition, this
paper also adopts a quadtree-based partition to increase
spatial context diversity when encoding the latent repre-
sentation in parallel. Experiments show that our codec
obtains 23.5% bitrate saving over previous SOTA NVC.
Better yet, our codec has surpassed ECM, the next-generation
traditional codec still under development, in both RGB and
YUV420 colorspaces in terms of PSNR. The code is at
https://github.com/microsoft/DCVC.
|
1. Introduction
The philosophy of video codec is that, for the current
signal to be encoded, the codec will find the relevant con-
texts (e.g., various predictions as the contexts) from previ-
ous reconstructed signals to reduce the spatial-temporal re-
dundancy. The more relevant contexts are, the higher bitrate
saving is achieved.
If looking back the development of traditional codecs
(from H.261 [17] in 1988 to H.266 [7] in 2020), we find
that the coding gain mainly comes from the continuously
expanded coding modes, where each mode uses a specific
manner to extract and utilize context. For example, the
numbers of intra prediction directions [42] in H.264, H.265,
H.266 are 9, 35, and 65, respectively. So many modes can
[Figure 1: bar chart "Bitrate Comparison over H.266 in Terms of PSNR" for H.265 (HM-16.25), H.266 (VTM-17.0), ECM-5.0, DCVC-HEM, and our DCVC-DC, in RGB and YUV420 colorspaces.]
Figure 1. Average results on UVG, MCL-JCV, and HEVC
datasets. All traditional codecs use their best compression-ratio
configuration. DCVC-HEM [29] is the previous SOTA NVC and
has only released a model for the RGB colorspace.
extract diverse contexts to reduce redundancy, but also bring
huge complexity as rate distortion optimization (RDO) is
used to search the best mode. For encoding a 1080p frame,
the under-developing ECM (the prototype of next genera-
tion traditional codec) needs up to half an hour [49]. Al-
though some DL-based methods [24,51,52] proposed accel-
erating traditional codecs, the complexity is still very high.
By contrast, neural video codec (NVC) changes the ex-
traction and utilization of context from hand-crafted de-
sign to automatic-learned manner. Mainstream frame-
works of NVC can be classified into residual coding-based
[1, 13, 31, 32, 34, 36, 47, 59, 61] and conditional coding-based
[21, 27–29, 33, 38, 50]. The residual coding explicitly uses
the predicted frame as the context, and the context utiliza-
tion is restricted to use subtraction for redundancy removal.
By comparison, conditional coding implicitly learns feature
domain contexts. The high dimension contexts can carry
richer information to facilitate encoding, decoding, as well
as entropy modelling.
However, for most NVCs, the manners of context extrac-
tion and utilization are still limited, e.g., only using optical
flow to explore temporal correlation. This makes NVC eas-
ily suffer from the uncertainty [12, 16, 37] in parameters or
fall into local optimum [25]. One solution is adding tradi-
tional codec-like coding modes into NVC [25]. But it brings
large computational complexity as RDO is used. So the
question comes up: how to better learn and use the contexts
while yielding low computational cost?
To this end, based on DCVC (deep contextual video
compression) [28] framework and its following work
DCVC-HEM [29], we propose a new model DCVC-DC
which efficiently utilizes the Diverse Contexts to further
boost compression ratio. At first, we guide DCVC-DC to
learn a hierarchical quality pattern across frames. With this
guidance during the training, the long-term and yet high-
quality contexts which are vital for the reconstruction of
the following frames are implicitly learned during the fea-
ture propagation. This helps further exploit the long-range
temporal correlation in video and effectively alleviate the
quality degradation problem that exists in most NVCs. In ad-
dition, we adopt the offset diversity [8] to strengthen the
optical flow-based codec, where multiple offsets can reduce
the warping errors for complex or large motions. In particu-
lar, inspired by the weighted prediction in traditional codec,
the offsets are divided into groups and the cross-group fu-
sion is proposed to improve the temporal context mining.
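A hedged sketch of group-based offset diversity with cross-group fusion is shown below; the layer shapes, the way the decoded optical flow is refined per group, and the 1x1 fusion convolution are illustrative assumptions, not the exact DCVC-DC design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def flow_warp(feat, flow):
    """Backward-warp feat (B, C, H, W) by a per-pixel flow (B, 2, H, W)."""
    B, _, H, W = feat.shape
    ys, xs = torch.meshgrid(torch.arange(H, device=feat.device),
                            torch.arange(W, device=feat.device), indexing="ij")
    coords = torch.stack((xs, ys), dim=0).float() + flow              # absolute sampling positions
    grid = torch.stack((2 * coords[:, 0] / (W - 1) - 1,               # normalize to [-1, 1]
                        2 * coords[:, 1] / (H - 1) - 1), dim=-1)
    return F.grid_sample(feat, grid, align_corners=True)

class GroupOffsetDiversity(nn.Module):
    def __init__(self, channels=64, groups=4):
        super().__init__()
        self.groups = groups
        self.offset_head = nn.Conv2d(channels + 2, 2 * groups, 3, padding=1)  # per-group offsets
        self.fuse = nn.Conv2d(channels, channels, 1)                          # cross-group fusion

    def forward(self, ref_feat, base_flow):
        # ref_feat: (B, C, H, W) propagated feature; base_flow: (B, 2, H, W) decoded optical flow
        offsets = self.offset_head(torch.cat([ref_feat, base_flow], dim=1))
        chunks = ref_feat.chunk(self.groups, dim=1)
        warped = [flow_warp(c, base_flow + offsets[:, 2 * g:2 * g + 2])
                  for g, c in enumerate(chunks)]
        return self.fuse(torch.cat(warped, dim=1))                    # mix information across groups
```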
Beyond the temporal dimension, this paper also pro-
poses increasing the spatial context diversity when encod-
ing the latent representation. Based on recent checkerboard
model [19] and dual spatial model [29, 56], we design a
quadtree-based partition to improve the distribution estima-
tion. When compared with [19,29], the types of correlation
modelling are more diverse hence the model has a larger
chance to find more relevant context.
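The toy sketch below illustrates one way to realize such a partition, splitting latent positions into four interleaved coding steps within each 2x2 block so that later steps condition on earlier ones; the actual partition (including any channel-wise split) and scan order used by DCVC-DC may differ.

```python
import torch

def quadtree_step_masks(height, width, device="cpu"):
    """Return four binary masks partitioning an H x W latent into interleaved
    sub-lattices; step s is entropy-coded using steps 0..s-1 as spatial context."""
    ys = torch.arange(height, device=device).view(-1, 1) % 2
    xs = torch.arange(width, device=device).view(1, -1) % 2
    step_id = ys * 2 + xs                                   # value 0..3 tiled over 2x2 blocks
    return [(step_id == s).float() for s in range(4)]

# Usage sketch: at step s, the entropy model sees y * sum(masks[:s]) as decoded context.
masks = quadtree_step_masks(4, 4)
```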
It is noted that all our designs are parallel-efficient. To
further reduce the computational cost, we also adopt depth-
wise separable convolution [10], and assign unequal chan-
nel numbers for features with different resolutions. Experi-
ments show that our DCVC-DC achieves much higher effi-
ciency over previous SOTA NVC and pushes the compres-
sion ratio to a new height. When compared with DCVC-
HEM [29], 23.5 % bitrate saving is achieved while MACs
(multiply–accumulate operations) are reduced by 19.4%.
Better yet, besides H.266-VTM 17.0, our codec also already
outperforms ECM-5.0 (its best compression ratio config-
uration for low delay coding is used) in both RGB and
YUV420 colorspaces, as shown in Fig. 1. To the best of
our knowledge, this is the first NVC which can achieve such
accomplishment. In summary, our contributions are:
• We propose efficiently increasing context diversity to
boost NVC. Diverse contexts are complementary to
each other and have larger chance to provide good ref-
erence for reducing redundancy.
• From the temporal dimension, we guide the model to extract
high-quality contexts to alleviate the quality degrada-
tion problem. In addition, the group-based offset di-
versity is designed for better temporal context mining.
• From the spatial dimension, we adopt a quadtree-based
partition for latent representation. This provides di-
verse spatial contexts for better entropy coding.
• Our DCVC-DC obtains 23.5% bitrate saving over the
previous SOTA NVC. In particular, our DCVC-DC has
surpassed the best traditional codec ECM in both RGB
and YUV420 colorspaces, which is an important mile-
stone in the development of NVC.
|
Pan_Cloud-Device_Collaborative_Adaptation_to_Continual_Changing_Environments_in_the_Real-World_CVPR_2023
|
Abstract
When facing changing environments in the real world,
the lightweight model on client devices suffers from se-
vere performance drops under distribution shifts. The main
limitations of the existing device model lie in (1) the inability
to update due to the computation limit of the device, and (2)
the limited generalization ability of the lightweight model.
Meanwhile, recent large models have shown strong gen-
eralization capability on the cloud while they can not be
deployed on client devices due to poor computation con-
straints. To enable the device model to deal with chang-
ing environments, we propose a new learning paradigm of
Cloud-Device Collaborative Continual Adaptation, which
encourages collaboration between cloud and device and im-
proves the generalization of the device model. Based on
this paradigm, we further propose an Uncertainty-based
Visual Prompt Adapted (U-VPA) teacher-student model to
transfer the generalization capability of the large model on
the cloud to the device model. Specifically, we first de-
sign the Uncertainty Guided Sampling (UGS) to screen out
challenging data continuously and transmit the most out-
of-distribution samples from the device to the cloud. Then
we propose a Visual Prompt Learning Strategy with Uncer-
tainty guided updating (VPLU) to specifically deal with the
selected samples with more distribution shifts. We trans-
mit the visual prompts to the device and concatenate them
with the incoming data to pull the device testing distribu-
tion closer to the cloud training distribution. We conduct
extensive experiments on two object detection datasets with
continually changing environments. Our proposed U-VPA
teacher-student framework outperforms previous state-of-
the-art test time adaptation and device-cloud collaboration
methods. The code and datasets will be released.
|
1. Introduction
The real world usually contains various environmental
changes along with continual distribution shifts [35]. Peo-
ple usually deploy economically lightweight models on
devices to boost the scalability and practicability. The
lightweight model can suffer severe performance degrada-
tion under continual distribution shifts [25, 29, 33, 35]. The
main challenges are: (1) The poor computational ability of
devices. Due to the constraints of device infrastructure, the
deployed model cannot be updated in time, thus lagging in
performance for the real world under distribution shift. (2)
The limited generalization ability of the lightweight model.
Since lightweight models are of relatively small capacity,
they can not handle continually changing environments. In
contrast, recent large models that are trained on the cloud
server show significant generalization ability [2, 39]. In
industry, however, these large models cannot be directly
applied due to the limited device infrastructure.
Therefore, we enable the device model to tackle real-
world environmental changes by proposing a Cloud-
Device Collaborative Continual Adaptation paradigm, as
shown in Fig. 1 (a). Previous Cloud-Device Collaborative
methods focus only on improving the model's representation of
variations in video frames but neglect the model's generaliza-
tion ability under continually changing data distributions. In
our new paradigm, we fully exploit the sufficient knowledge
of the large cloud model and transfer the continual general-
ization ability to the device lightweight model.
In particular, we design an Uncertainty-based Visual
Prompt Adapted (U-VPA) teacher-student model, which
consists of an Uncertainty Guided Sampling (UGS) strategy
and a Visual Prompt Learning Strategy with Uncertainty
Guided Updating (VPLU). Due to the communication band-
width constraint [11, 13, 19], and different from prior work
that filters parameters [22, 26], we design the UGS
to screen out the most environment-specific samples and de-
crease the required bandwidth compared with transmitting
the whole sequence. To leverage the strong generalization
ability of large models, we introduce the VPLU to align
source-target domain distribution and transfer the represen-
tation of the large teacher model to the light student model.
The light student model and visual prompts are then deliv-
ered to the device, thus coping with the continuously chang-
ing scenarios in the real world.
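A minimal sketch of such uncertainty-guided sample selection is given below; the entropy-based score and the fixed upload budget are assumptions for illustration, not the paper's exact criterion.

```python
import torch

def uncertainty_guided_sampling(det_probs, budget_ratio=0.1):
    """Score each device-side image by the mean predictive entropy of its detections
    and return the indices of the most uncertain fraction to upload to the cloud.

    det_probs: list of (N_i, C) tensors of class probabilities per image."""
    scores = []
    for p in det_probs:
        p = p.clamp_min(1e-8)
        ent = -(p * p.log()).sum(dim=-1)                     # per-detection entropy
        scores.append(ent.mean() if len(ent) > 0 else torch.tensor(0.0))
    scores = torch.stack(scores)
    k = max(1, int(budget_ratio * len(scores)))
    return torch.topk(scores, k).indices                     # most out-of-distribution images
```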
Experimental results show that our method outperforms
the state-of-the-art methods on synthetic and real-world dis-
tribution shift datasets, as shown in Fig. 1 (b). Besides, we
can achieve the same performance as the entire data with
fewer reflowed data (42% of total data). As another bene-
fit, fewer reflowed data reduce the bandwidth pressure of
the uplink. As for the downlink, we can deliver the vi-
sual prompts (0.43% of the model’s parameters) with al-
most negligible bandwidth to the device and apply the vi-
sual prompt to the input data to improve the performance of
the device model by 3.9% in mAP.
Our contributions can be summarized as follows:
• We make the first attempt to deal with continually
changing scenarios by proposing a Cloud-Device Col-
laborative Continual Adaptation paradigm, which aims to transfer the generalization ability from the large
cloud model to the lightweight device model. Our
method is a general paradigm that can apply to real-
world systems.
• We design an Uncertainty-based Visual Prompt
Adapted (U-VPA) teacher-student model, which con-
sists of UGS and VPLU. We introduce UGS to screen
out the most environment-specific samples and de-
crease the required bandwidth compared with trans-
mitting the whole sequence. We propose a VPLU
to align source-target data distribution and transfer
the representation of the large teacher model to the
lightweight student model.
• Experiments show that our proposed framework and
method surpass other state-of-the-art methods and can
effectively improve the continuous domain adaptation
capability of the device model.
|
Li_On_the_Effectiveness_of_Partial_Variance_Reduction_in_Federated_Learning_CVPR_2023
|
Abstract
Data heterogeneity across clients is a key challenge in federated learning. Prior works address this by either aligning client and server models or using control variates to correct client model drift. Although these methods achieve fast convergence in convex or simple non-convex problems, the performance in over-parameterized models such as deep neural networks is lacking. In this paper, we first revisit the widely used FedAvg algorithm in a deep neural network to understand how data heterogeneity influences the gradient updates across the neural network layers. We observe that while the feature extraction layers are learned efficiently by FedAvg, the substantial diversity of the final classification layers across clients impedes the performance. Motivated by this, we propose to correct model drift by variance reduction only on the final layers. We demonstrate that this significantly outperforms existing benchmarks at a similar or lower communication cost. We furthermore provide proof for the convergence rate of our algorithm.
|
1. Introduction
Federated learning (FL) is emerging as an essential distributed learning paradigm in large-scale machine learning. Unlike in traditional machine learning, where a model is trained on the collected centralized data, in federated learning each client (e.g., phones and institutions) learns a model with its local data. A centralized model is then obtained by aggregating the updates from all participating clients without ever requesting the client data, thereby ensuring a certain level of user privacy [13, 17]. Such an algorithm is especially beneficial for tasks where the data is sensitive, e.g., chemical hazard detection and disease diagnosis [33].

[Figure 1: schematic of the FedPVR framework with a server, clients 1..N, a feature extractor, a classifier, and variance reduction applied to the classifier.]
Figure 1. Our proposed FedPVR framework with the performance (communicated parameters per round, client<->server). A smaller α corresponds to higher data heterogeneity. Our method achieves a better speedup than existing approaches by transmitting a slightly larger number of parameters than FedAvg.

Two primary challenges in federated learning are i) handling data heterogeneity across clients [13] and ii) limiting the cost of communication between the server and clients [10]. In this setting, FedAvg [17] is one of the most widely used schemes: a server broadcasts its model to clients, which then update the model using their local data in a series of steps before sending their individual models to the server, where the models are aggregated by averaging the parameters. The process is repeated for multiple communication rounds. While it has shown great success in many applications, it tends to achieve subpar accuracy and convergence when the data are heterogeneous [14, 24, 31].

The slow and sometimes unstable convergence of FedAvg can be caused by client drift [14] brought on by data heterogeneity. Numerous efforts have been made to improve FedAvg's performance in this setting. Prior works attempt to mitigate client drift by penalizing the distance between a client model and the server model [20, 31] or by performing variance reduction techniques while updating client models [1, 14, 32]. These works demonstrate fast convergence on convex problems or for simple neural networks; however, their performance on deep neural
networks, which are state-of-the-art for many centralized learning tasks [11, 34], has yet to be well explored. Adapting techniques that perform well on convex problems to neural networks is non-trivial [7] due to their "intriguing properties" [38] such as over-parametrization and permutation symmetries.

To overcome the above issues, we revisit the FedAvg algorithm with a deep neural network (VGG-11 [34]) under the assumption of data heterogeneity and full client participation. Specifically, we investigate which layers in a neural network are mostly influenced by data heterogeneity. We define drift diversity, which measures the diversity of the directions and scales of the averaged gradients across clients per communication round. We observe that in the non-IID scenario, the deeper layers, especially the final classification layer, have the highest diversity across clients compared to an IID setting. This indicates that FedAvg learns good feature representations even in the non-IID scenario [5] and that the significant variation of the deeper layers across clients is a primary cause of FedAvg's subpar performance.

Based on the above observations, we propose to align the classification layers across clients using variance reduction. Specifically, we estimate the average updating direction of the classifiers (the last several fully connected layers) at the client level (c_i) and the server level (c) and use their difference as a control variate [14] to reduce the variance of the classifiers across clients. We analyze our proposed algorithm and derive a convergence rate bound.

We perform experiments on the popular federated learning benchmark datasets CIFAR10 [19] and CIFAR100 [19] using two types of neural networks, VGG-11 [34] and ResNet-8 [11], and different levels of data heterogeneity across clients. We experimentally show that we require fewer communication rounds compared to the existing methods [14, 17, 31] to achieve the same accuracy while transmitting a similar or slightly larger number of parameters between server and clients than FedAvg (see Fig. 1). With a (large) fixed number of communication rounds, our method achieves on-par or better top-1 accuracy, and in some settings it even outperforms centralized learning. Using conformal prediction [3], we show how performance can be improved further using adaptive prediction sets.

We show that applying variance reduction on the last layers increases the diversity of the feature extraction layers. This diversity in the feature extraction layers may give each client more freedom to learn richer feature representations, and the uniformity in the classifier then ensures a less biased decision. We summarize our contributions here:

• We present our algorithm for partial variance-reduced federated learning (FedPVR). We experimentally demonstrate that the key to the success of our algorithm is the diversity between the feature extraction layers and the alignment between the classifiers.

• We prove the convergence rate in the convex and non-convex settings, precisely characterize its weak dependence on data-heterogeneity measures, and show that FedPVR provably converges as fast as the centralized SGD baseline in most practically relevant cases.

• We experimentally show that our algorithm is more communication efficient than previous works across various levels of data heterogeneity, datasets, and neural network architectures. In some cases where data heterogeneity exists, the proposed algorithm even performs slightly better than centralized learning.
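To illustrate the partial variance reduction described in this introduction, the sketch below applies SCAFFOLD-style control variates only to the classifier parameters during a client's local update; the variable names, the assumption that the model holds the freshly broadcast server weights, and the control-variate refresh are simplifications rather than FedPVR's exact algorithm.

```python
import copy
import torch

def fedpvr_client_update(model, loader, loss_fn, c_server, c_client,
                         classifier_keys, lr=0.01, local_steps=10):
    """Local SGD where only the gradients of the final (classifier) layers are
    corrected by the control-variate difference (c_server - c_client)."""
    params = dict(model.named_parameters())
    init_cls = {k: params[k].detach().clone() for k in classifier_keys}

    data_iter = iter(loader)
    for _ in range(local_steps):
        x, y = next(data_iter)
        model.zero_grad()
        loss_fn(model(x), y).backward()
        with torch.no_grad():
            for name, p in params.items():
                g = p.grad
                if name in classifier_keys:            # variance reduction on final layers only
                    g = g - c_client[name] + c_server[name]
                p.add_(g, alpha=-lr)

    # SCAFFOLD "option II"-style refresh, restricted to the classifier parameters.
    new_c_client = {
        k: c_client[k] - c_server[k] + (init_cls[k] - params[k].detach()) / (local_steps * lr)
        for k in classifier_keys
    }
    return copy.deepcopy(model.state_dict()), new_c_client
```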
|
Lv_Improving_Generalization_With_Domain_Convex_Game_CVPR_2023
|
Abstract
Domain generalization (DG) tends to alleviate the poor
generalization capability of deep neural networks by learn-
ing model with multiple source domains. A classical solution
to DG is domain augmentation, the common belief of which
is that diversifying source domains will be conducive to the
out-of-distribution generalization. However, these claims
are understood intuitively, rather than mathematically. Our
explorations empirically reveal that the correlation between
model generalization and the diversity of domains may be not
strictly positive, which limits the effectiveness of domain aug-
mentation. This work therefore aim to guarantee and further
enhance the validity of this strand. To this end, we propose a
new perspective on DG that recasts it as a convex game be-
tween domains. We first encourage each diversified domain
to enhance model generalization by elaborately designing a
regularization term based on supermodularity. Meanwhile, a
sample filter is constructed to eliminate low-quality samples,
thereby avoiding the impact of potentially harmful informa-
tion. Our framework presents a new avenue for the formal
analysis of DG, heuristic analysis and extensive experiments
demonstrate the rationality and effectiveness.1
|
1. Introduction
Owning extraordinary representation learning ability,
deep neural networks (DNNs) have achieved remarkable
success on a variety of tasks when the training and test data
are drawn from the same distribution [9, 11, 16]. Whereas
for out-of-distribution data, DNNs have demonstrated poor
generalization capability since the i.i.d. assumption is vio-
lated, which is common in real-world conditions [27, 28, 42].
To tackle this issue, domain generalization (DG) has become
a propulsion technology, aiming to learn a robust model from
multiple source domains so that can generalize well to any
unseen target domains with different statistics [2, 19, 22, 30].
1 Code is available at https://github.com/BIT-DA/DCG.

[Figure 1: accuracy (%) versus the number of augmented domains (from 12.5%N to 100%N) for BASELINE and FACT, together with the ideal trends (a) and (b); panels: (a) Cartoon, (b) Sketch.]
Figure 1. The relation between model generalization and domain
diversity with Cartoon and Sketch on the PACS dataset as the un-
seen target domain, respectively. N is the maximum number of
augmented domains. Note that the solid lines denote the actual re-
sults of a BASELINE method that combines DeepAll with the Fourier
augmentation strategy and a SOTA domain augmentation method
FACT, while the dashed lines represent the ideal relation in this work.

Among extensive solutions to improve generalization, do-
main augmentation [39, 46, 48, 56] has been a classical and
prevalent strategy, which focuses on exposing the model to
more diverse domains via some augmentation techniques.
A common belief is that generalizable models would be-
come easier to learn when the training distributions become
more diverse, which has been also emphasized by a recent
work [47]. Notwithstanding the promising results shown
by this strand of approaches, the claims above are vague
and lack theoretical justification; formal analyses of the
relation between domain diversity and model generalization
are sparse. Further, the transfer of knowledge may even hurt
the performance on target domains in some cases, which is
referred to as negative transfer [33, 41]. Thus the relation of
domain diversity and model generalization remains unclear.
In light of these points, we begin by considering the question:
The stronger the domain diversity, will it certainly help
to improve the model generalization capability?
To explore this issue, we first quantify domain diversity
as the number of augmented domains. Then we conduct a
brief experiment using Fourier augmentation strategy [48] as
a classical and representative instance. The results presented
in Fig 1 show that with the increase of domain diversity, the
model generalization (measured by the accuracy on unseen
target domain) may not necessarily increase, but sometimes
decreases instead, as the solid lines show. On the one hand,
this may be because the model does not best utilize the rich
information of diversified domains; on the other hand, it
may be due to the existence of low-quality samples which
contain redundant or noisy information that is unprofitable
to generalization [18]. This discovery indicates that there
is still room for improvement of the effectiveness of do-
main augmentation if we enable each domain to be certainly
conducive to model generalization as the dash lines in Fig 1.
In this work, we therefore aim to ensure the strictly posi-
tive correlation between model generalization and domain
diversity to guarantee and further enhance the effectiveness
of domain augmentation. To do this, we take inspiration
from the literature on convex games, which require each player
to bring profit to the coalition [4, 13, 40]; this is consistent
with our key insight, i.e., making each domain bring benefit to
model generalization. Thus, we propose to formalize DG as
a convex game between domains. First, we design a novel
regularization term based on the supermodularity of convex
game. This regularization encourages each diversified do-
main to contribute to improving model generalization, thus
enables the model to better exploit the diverse information.
Meanwhile, considering that there may exist samples
with unprofitable or even harmful information to general-
ization, we further construct a sample filter based on the
proposed regularization to get rid of the low-quality samples
such as noisy or redundant ones, so that their deterioration
to model generalization can be avoided. We provide some
heuristic analyses and intuitive explanations about the mech-
anisms behind it to demonstrate the rationality in Section 4.
Nevertheless, it is well known that the supermodularity
also indicates increasing marginal contribution, which may
not hold intuitively in DG, where the marginal contribution
of domains is generally decreasing. To mitigate the gap be-
tween theory and practice, we impose a constraint on the
naive supermodularity when constructing our regularization
term. We constrain the regularization to work only in the case
that the supermodularity is violated, i.e., when the marginal
contribution of domains decreases. Thus, the limit of our
regularization optimization is actually to achieve a constant
marginal contribution, rather than an impracticable increasing
marginal contribution. Hence, our regularization can
additionally keep the decreasing speed of the marginal
contribution as slow as possible by optimizing towards a
constant marginal contribution, just like changing the line
Ideal (a) in Fig. 1 into line Ideal (b). Generally, the role of
our proposed supermodularity regularization is to encour-
age the contribution of each domain, and further relieve the
decreasing marginal contribution of domains to a certain
extent, so as to better utilize the diversified information.
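As a toy illustration of this constrained regularization, the sketch below penalizes only decreases in the marginal contribution as domains are added to the training coalition; how the per-coalition generalization gains are measured is an assumption, and the paper's actual regularizer over domain subsets may differ.

```python
import torch
import torch.nn.functional as F

def supermodularity_penalty(coalition_gains):
    """coalition_gains: tensor [v_0, ..., v_K] of generalization gains as domains are
    added one by one. The hinge activates only where the marginal contribution
    (v_{k+1} - v_k) drops below the previous one, pushing it towards being constant."""
    marginal = coalition_gains[1:] - coalition_gains[:-1]
    return F.relu(marginal[:-1] - marginal[1:]).sum()

# Toy usage: marginal gains shrink (0.30 -> 0.15 -> 0.07), so the penalty is positive (~0.23).
reg = supermodularity_penalty(torch.tensor([0.0, 0.30, 0.45, 0.52]))
```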
Contributions. Our contributions in this work include:
(i) Exploring the relation of model generalization and source
domain diversity, which reveals the limit of the previous domain
augmentation strand; (ii) Introducing convex game into DG to guarantee and further enhance the validity of domain
augmentation. The proposed framework encourages each
domain to conducive to generalization while avoiding the
negative impact of low-quality samples, enabling the model
to better utilize the information within diversified domains;
(iii) Providing heuristic analysis and intuitive explanations
about the rationality. The effectiveness and superiority are
verified empirically across extensive real-world datasets.
|