Dataset columns (name: type, observed range or string length):
Unnamed: 0: int64, values 0 to 2.72k
title: string, length 14 to 153
Arxiv link: string, length 1 to 31
authors: string, length 5 to 1.5k
arxiv_id: float64, values 2k to 2.41k
abstract: string, length 435 to 2.86k
Model: string, 1 class
GitHub: string, 1 class
Space: string, 1 class
Dataset: string, 1 class
id: int64, values 0 to 2.72k
2,200
MMM: Generative Masked Motion Model
http://arxiv.org/abs/2312.03596
Ekkasit Pinyoanuntapong, Pu Wang, Minwoo Lee, Chen Chen
2312.03596
Recent advances in text-to-motion generation using diffusion and autoregressive models have shown promising results. However, these models often suffer from a trade-off between real-time performance, high fidelity, and motion editability. To address this gap, we introduce MMM, a novel yet simple motion generation paradigm based on a Masked Motion Model. MMM consists of two key components: (1) a motion tokenizer that transforms 3D human motion into a sequence of discrete tokens in latent space, and (2) a conditional masked motion transformer that learns to predict randomly masked motion tokens, conditioned on the pre-computed text tokens. By attending to motion and text tokens in all directions, MMM explicitly captures inherent dependency among motion tokens and semantic mapping between motion and text tokens. During inference, this allows parallel and iterative decoding of multiple motion tokens that are highly consistent with fine-grained text descriptions, therefore simultaneously achieving high-fidelity and high-speed motion generation. In addition, MMM has innate motion editability. By simply placing mask tokens in the place that needs editing, MMM automatically fills the gaps while guaranteeing smooth transitions between editing and non-editing parts. Extensive experiments on the HumanML3D and KIT-ML datasets demonstrate that MMM surpasses current leading methods in generating high-quality motion (evidenced by superior FID scores of 0.08 and 0.429), while offering advanced editing features such as body-part modification, motion in-betweening, and the synthesis of long motion sequences. In addition, MMM is two orders of magnitude faster on a single mid-range GPU than editable motion diffusion models. Our project page is available at https://exitudio.github.io/MMM-page/.
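The parallel, iterative decoding described above follows the general recipe of confidence-based masked-token generation. The sketch below is a minimal illustration of that recipe, not MMM's released code; the transformer interface, the mask-token id, and the cosine masking schedule are assumptions.

```python
import math
import torch

@torch.no_grad()
def iterative_masked_decoding(transformer, text_tokens, seq_len, mask_id, num_steps=10):
    """Confidence-based parallel decoding of masked motion tokens (illustrative).
    `transformer(tokens, text_tokens)` is assumed to return per-position logits
    of shape (seq_len, codebook_size)."""
    tokens = torch.full((seq_len,), mask_id, dtype=torch.long)
    for step in range(num_steps):
        logits = transformer(tokens, text_tokens)
        conf, pred = logits.softmax(dim=-1).max(dim=-1)     # per-position confidence
        conf[tokens != mask_id] = float("inf")              # never re-mask decoded tokens
        # cosine schedule: how many positions stay masked after this step
        keep_masked = int(seq_len * math.cos(math.pi / 2 * (step + 1) / num_steps))
        if keep_masked > 0:
            lowest = conf.topk(keep_masked, largest=False).indices
            pred[lowest] = mask_id                          # least confident stay masked
        tokens = torch.where(tokens == mask_id, pred, tokens)
    return tokens
```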
[]
[]
[]
[]
2,200
2,201
PEGASUS: Personalized Generative 3D Avatars with Composable Attributes
http://arxiv.org/abs/2402.10636
Hyunsoo Cha, Byungjun Kim, Hanbyul Joo
2402.10636
We present PEGASUS a method for constructing a personalized generative 3D face avatar from monocular video sources. Our generative 3D avatar enables disentangled controls to selectively alter the facial attributes (e.g. hair or nose) while preserving the identity. Our approach consists of two stages: synthetic database generation and constructing a personalized generative avatar. We generate a synthetic video collection of the target identity with varying facial attributes where the videos are synthesized by borrowing the attributes from monocular videos of diverse identities. Then we build a person-specific generative 3D avatar that can modify its attributes continuously while preserving its identity. Through extensive experiments we demonstrate that our method of generating a synthetic database and creating a 3D generative avatar is the most effective in preserving identity while achieving high realism. Subsequently we introduce a zero-shot approach to achieve the same goal of generative modeling more efficiently by leveraging a previously constructed personalized generative model.
[]
[]
[]
[]
2,201
2,202
LMDrive: Closed-Loop End-to-End Driving with Large Language Models
http://arxiv.org/abs/2312.07488
Hao Shao, Yuxuan Hu, Letian Wang, Guanglu Song, Steven L. Waslander, Yu Liu, Hongsheng Li
2312.07488
Despite significant recent progress in the field of autonomous driving modern methods still struggle and can incur serious accidents when encountering long-tail unforeseen events and challenging urban scenarios. On the one hand large language models (LLM) have shown impressive reasoning capabilities that approach "Artificial General Intelligence". On the other hand previous autonomous driving methods tend to rely on limited-format inputs (e.g. sensor data and navigation waypoints) restricting the vehicle's ability to understand language information and interact with humans. To this end this paper introduces LMDrive a novel language-guided end-to-end closed-loop autonomous driving framework. LMDrive uniquely processes and integrates multi-modal sensor data with natural language instructions enabling interaction with humans and navigation software in realistic instructional settings. To facilitate further research in language-based closed-loop autonomous driving we also publicly release the corresponding dataset which includes approximately 64K instruction-following data clips and the LangAuto benchmark that tests the system's ability to handle complex instructions and challenging driving scenarios. Extensive closed-loop experiments are conducted to demonstrate LMDrive's effectiveness. To the best of our knowledge we're the very first work to leverage LLMs for closed-loop end-to-end autonomous driving. Code is available at https://github.com/opendilab/LMDrive
[]
[]
[]
[]
2,202
2,203
MCD: Diverse Large-Scale Multi-Campus Dataset for Robot Perception
http://arxiv.org/abs/2403.11496
Thien-Minh Nguyen, Shenghai Yuan, Thien Hoang Nguyen, Pengyu Yin, Haozhi Cao, Lihua Xie, Maciej Wozniak, Patric Jensfelt, Marko Thiel, Justin Ziegenbein, Noel Blunder
2403.11496
Perception plays a crucial role in various robot applications. However existing well-annotated datasets are biased towards autonomous driving scenarios while unlabelled SLAM datasets are quickly over-fitted and often lack environment and domain variations. To expand the frontier of these fields we introduce a comprehensive dataset named MCD (Multi-Campus Dataset) featuring a wide range of sensing modalities high-accuracy ground truth and diverse challenging environments across three Eurasian university campuses. MCD comprises both CCS (Classical Cylindrical Spinning) and NRE (Non-Repetitive Epicyclic) lidars high-quality IMUs (Inertial Measurement Units) cameras and UWB (Ultra-WideBand) sensors. Furthermore in a pioneering effort we introduce semantic annotations of 29 classes over 59k sparse NRE lidar scans across three domains thus providing a novel challenge to existing semantic segmentation research upon this largely unexplored lidar modality. Finally we propose for the first time to the best of our knowledge continuous-time ground truth based on optimization-based registration of lidar-inertial data on large survey-grade prior maps which are also publicly released each several times the size of existing ones. We conduct a rigorous evaluation of numerous state-of-the-art algorithms on MCD report their performance and highlight the challenges awaiting solutions from the research community.
[]
[]
[]
[]
2,203
2,204
Diff-Plugin: Revitalizing Details for Diffusion-based Low-level Tasks
Yuhao Liu, Zhanghan Ke, Fang Liu, Nanxuan Zhao, Rynson W.H. Lau
null
Diffusion models trained on large-scale datasets have achieved remarkable progress in image synthesis. However due to the randomness in the diffusion process they often struggle with handling diverse low-level tasks that require details preservation. To overcome this limitation we present a new Diff-Plugin framework to enable a single pre-trained diffusion model to generate high-fidelity results across a variety of low-level tasks. Specifically we first propose a lightweight Task-Plugin module with a dual branch design to provide task-specific priors guiding the diffusion process in preserving image content. We then propose a Plugin-Selector that can automatically select different Task-Plugins based on the text instruction allowing users to edit images by indicating multiple low-level tasks with natural language. We conduct extensive experiments on 8 low-level vision tasks. The results demonstrate the superiority of Diff-Plugin over existing methods particularly in real-world scenarios. Our ablations further validate that Diff-Plugin is stable schedulable and supports robust training across different dataset sizes.
[]
[]
[]
[]
2,204
2,205
AHIVE: Anatomy-aware Hierarchical Vision Encoding for Interactive Radiology Report Retrieval
Sixing Yan, William K. Cheung, Ivor W. Tsang, Keith Chiu, Terence M. Tong, Ka Chun Cheung, Simon See
null
Automatic radiology report generation using deep learning models has been recently explored and found promising. Neural decoders are commonly used for the report generation where irrelevant and unfaithful contents are unavoidable. The retrieval-based approach alleviates the limitation by identifying reports which are relevant to the input to assist the generation. To achieve clinically accurate report retrieval we make reference to clinicians' diagnostic steps of examining a radiology image where anatomical and diagnostic details are typically focused and propose a novel hierarchical visual concept representation called anatomy-aware hierarchical vision encoding (AHIVE). To learn AHIVE we first derive a methodology to extract hierarchical diagnostic descriptions from radiology reports and develop a CLIP-based framework for the model training. Also the hierarchical architecture of AHIVE is designed to support interactive report retrieval so that report revision made at one layer can be propagated to the subsequent ones to trigger other necessary revisions. We conduct extensive experiments and show that AHIVE can outperform the SOTA vision-language retrieval methods in terms of clinical accuracy by a large margin. We provide also a case study to illustrate how it enables interactive report retrieval.
[]
[]
[]
[]
2,205
2,206
CyberDemo: Augmenting Simulated Human Demonstration for Real-World Dexterous Manipulation
http://arxiv.org/abs/2402.14795
Jun Wang, Yuzhe Qin, Kaiming Kuang, Yigit Korkmaz, Akhilan Gurumoorthy, Hao Su, Xiaolong Wang
2402.14795
We introduce CyberDemo, a novel approach to robotic imitation learning that leverages simulated human demonstrations for real-world tasks. By incorporating extensive data augmentation in a simulated environment, CyberDemo outperforms traditional in-domain real-world demonstrations when transferred to the real world, handling diverse physical and visual conditions. Regardless of its affordability and convenience in data collection, CyberDemo outperforms baseline methods in terms of success rates across various tasks and exhibits generalizability with previously unseen objects. For example, it can rotate novel tetra-valve and penta-valve objects despite human demonstrations only involving tri-valves. Our research demonstrates the significant potential of simulated human demonstrations for real-world dexterous manipulation tasks. More details can be found at https://cyber-demo.github.io/
[]
[]
[]
[]
2,206
2,207
MaskCLR: Attention-Guided Contrastive Learning for Robust Action Representation Learning
Mohamed Abdelfattah, Mariam Hassan, Alexandre Alahi
null
Current transformer-based skeletal action recognition models tend to focus on a limited set of joints and low-level motion patterns to predict action classes. This results in significant performance degradation under small skeleton perturbations or changing the pose estimator between training and testing. In this work, we introduce MaskCLR, a new Masked Contrastive Learning approach for Robust skeletal action recognition. We propose an Attention-Guided Probabilistic Masking strategy to occlude the most important joints and encourage the model to explore a larger set of discriminative joints. Furthermore, we propose a Multi-Level Contrastive Learning paradigm to enforce the representations of standard and occluded skeletons to be class-discriminative, i.e., more compact within each class and more dispersed across different classes. Our approach helps the model capture the high-level action semantics instead of low-level joint variations, and can be conveniently incorporated into transformer-based models. Without loss of generality, we combine MaskCLR with three transformer backbones: the vanilla transformer, DSTFormer, and STTFormer. Extensive experiments on NTU60, NTU120, and Kinetics400 show that MaskCLR consistently outperforms previous state-of-the-art methods on standard and perturbed skeletons from different pose estimators, showing improved accuracy, generalization, and robustness. Project website: https://maskclr.github.io.
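As a rough illustration of attention-guided probabilistic masking, the sketch below samples the joints to occlude with probability proportional to a per-joint attention score; the tensor layout, the zero-fill occlusion, and the mask ratio are assumptions rather than the paper's implementation.

```python
import torch

def attention_guided_masking(joint_feats, attn_scores, mask_ratio=0.15):
    """Occlude joints sampled with probability proportional to their attention
    scores (illustrative; not the paper's implementation).
    joint_feats: (B, T, J, C) skeleton features, attn_scores: (B, J) >= 0."""
    B, T, J, C = joint_feats.shape
    probs = attn_scores / attn_scores.sum(dim=-1, keepdim=True)
    num_mask = max(1, int(mask_ratio * J))
    masked_idx = torch.multinomial(probs, num_mask, replacement=False)   # (B, num_mask)
    mask = torch.zeros(B, J, dtype=torch.bool, device=joint_feats.device)
    mask.scatter_(1, masked_idx, True)
    occluded = joint_feats.clone()
    occluded[mask.unsqueeze(1).expand(B, T, J)] = 0.0   # zero masked joints in all frames
    return occluded, mask
```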
[]
[]
[]
[]
2,207
2,208
Narrative Action Evaluation with Prompt-Guided Multimodal Interaction
http://arxiv.org/abs/2404.14471
Shiyi Zhang, Sule Bai, Guangyi Chen, Lei Chen, Jiwen Lu, Junle Wang, Yansong Tang
2404.14471
In this paper we investigate a new problem called narrative action evaluation (NAE). NAE aims to generate professional commentary that evaluates the execution of an action. Unlike traditional tasks such as score-based action quality assessment and video captioning involving superficial sentences NAE focuses on creating detailed narratives in natural language. These narratives provide intricate descriptions of actions along with objective evaluations. NAE is a more challenging task because it requires both narrative flexibility and evaluation rigor. One existing possible solution is to use multi-task learning where narrative language and evaluative information are predicted separately. However this approach results in reduced performance for individual tasks because of variations between tasks and differences in modality between language information and evaluation information. To address this we propose a prompt-guided multimodal interaction framework. This framework utilizes a pair of transformers to facilitate the interaction between different modalities of information. It also uses prompts to transform the score regression task into a video-text matching task thus enabling task interactivity. To support further research in this field we re-annotate the MTL-AQA and FineGym datasets with high-quality and comprehensive action narration. Additionally we establish benchmarks for NAE. Extensive experiment results prove that our method outperforms separate learning methods and naive multi-task learning methods. Data and code will be released at https://github.com/shiyi-zh0408/NAE_CVPR2024.
[]
[]
[]
[]
2,208
2,209
R-Cyclic Diffuser: Reductive and Cyclic Latent Diffusion for 3D Clothed Human Digitalization
Kennard Yanting Chan, Fayao Liu, Guosheng Lin, Chuan Sheng Foo, Weisi Lin
null
Recently the authors of Zero-1-to-3 demonstrated that a latent diffusion model pretrained with Internet-scale data can not only address the single-view 3D object reconstruction task but can even attain SOTA results in it. However when applied to the task of single-view 3D clothed human reconstruction Zero-1-to-3 (and related models) are unable to compete with the corresponding SOTA methods in this field despite being trained on clothed human data. In this work we aim to tailor Zero-1-to-3's approach to the single-view 3D clothed human reconstruction task in a much more principled and structured manner. To this end we propose R-Cyclic Diffuser a framework that adapts Zero-1-to-3's novel approach to clothed human data by fusing it with a pixel-aligned implicit model. R-Cyclic Diffuser offers a total of three new contributions. The first and primary contribution is R-Cyclic Diffuser's cyclical conditioning mechanism for novel view synthesis. This mechanism directly addresses the view inconsistency problem faced by Zero-1-to-3 and related models. Secondly we further enhance this mechanism with two key features - Lateral Inversion Constraint and Cyclic Noise Selection. Both features are designed to regularize and restrict the randomness of outputs generated by a latent diffusion model. Thirdly we show how SMPL-X body priors can be incorporated in a latent diffusion model such that novel views of clothed human bodies can be generated much more accurately. Our experiments show that R-Cyclic Diffuser is able to outperform current SOTA methods in single-view 3D clothed human reconstruction both qualitatively and quantitatively. Our code is made publicly available at https://github.com/kcyt/r-cyclic-diffuser.
[]
[]
[]
[]
2,209
2,210
Intelligent Grimm - Open-ended Visual Storytelling via Latent Diffusion Models
Chang Liu, Haoning Wu, Yujie Zhong, Xiaoyun Zhang, Yanfeng Wang, Weidi Xie
null
Generative models have recently exhibited exceptional capabilities in text-to-image generation, but still struggle to generate image sequences coherently. In this work, we focus on a novel yet challenging task of generating a coherent image sequence based on a given storyline, denoted as open-ended visual storytelling. We make the following three contributions: (i) to fulfill the task of visual storytelling, we propose a learning-based auto-regressive image generation model, termed StoryGen, with a novel vision-language context module that enables generating the current frame by conditioning on the corresponding text prompt and preceding image-caption pairs; (ii) to address the data shortage of visual storytelling, we collect paired image-text sequences by sourcing from online videos and open-source E-books, establishing a processing pipeline for constructing a large-scale dataset with diverse characters, storylines, and artistic styles, named StorySalon; (iii) quantitative experiments and human evaluations have validated the superiority of our StoryGen, where we show it can generalize to unseen characters without any optimization and generate image sequences with coherent content and consistent characters. Code, dataset, and models are available at https://haoningwu3639.github.io/StoryGen_Webpage/
[]
[]
[]
[]
2,210
2,211
Validating Privacy-Preserving Face Recognition under a Minimum Assumption
Hui Zhang, Xingbo Dong, YenLung Lai, Ying Zhou, Xiaoyan Zhang, Xingguo Lv, Zhe Jin, Xuejun Li
null
The widespread use of cloud-based face recognition technology raises privacy concerns, as unauthorized access to face images can expose personal information or be exploited for fraudulent purposes. In response, privacy-preserving face recognition (PPFR) schemes have emerged to hide visual information and thwart unauthorized access. However, the validation methods employed by these schemes often rely on unrealistic assumptions, leaving doubts about their true effectiveness in safeguarding facial privacy. In this paper, we introduce a new approach to privacy validation called Minimum Assumption Privacy Protection Validation (Map^2V). This is the first exploration of formulating a privacy validation method utilizing deep image priors and zeroth-order gradient estimation, with the potential to serve as a general framework for PPFR evaluation. Building upon Map^2V, we comprehensively validate the privacy-preserving capability of PPFRs through a combination of human and machine vision. The experiment results and analysis demonstrate the effectiveness and generalizability of the proposed Map^2V, showcasing its superiority over native privacy validation methods from the PPFR literature. Additionally, this work exposes privacy vulnerabilities in evaluated state-of-the-art PPFR schemes, laying the foundation for the subsequent effective proposal of countermeasures. The source code is available at https://github.com/Beauty9882/MAP2V.
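The abstract mentions zeroth-order gradient estimation, which queries a black-box score instead of back-propagating through it. Below is a generic two-point estimator of that kind, shown only to clarify the technique; it is not the authors' Map^2V procedure, and the score function, sample count, and smoothing radius are placeholders.

```python
import numpy as np

def zeroth_order_gradient(score_fn, x, num_samples=20, sigma=1e-2):
    """Two-point zeroth-order estimate of the gradient of a black-box score
    (e.g., a similarity returned by a face-recognition service) at input x.
    A standard estimator shown for illustration, not the Map^2V code."""
    grad = np.zeros_like(x)
    for _ in range(num_samples):
        u = np.random.randn(*x.shape)
        grad += (score_fn(x + sigma * u) - score_fn(x - sigma * u)) / (2.0 * sigma) * u
    return grad / num_samples
```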
[]
[]
[]
[]
2,211
2,212
Long-Tailed Anomaly Detection with Learnable Class Names
http://arxiv.org/abs/2403.20236
Chih-Hui Ho, Kuan-Chuan Peng, Nuno Vasconcelos
2403.20236
Anomaly detection (AD) aims to identify defective images and localize their defects (if any). Ideally AD models should be able to detect defects over many image classes; without relying on hard-coded class names that can be uninformative or inconsistent across datasets; learn without anomaly supervision; and be robust to the long-tailed distributions of real-world applications. To address these challenges we formulate the problem of long-tailed AD by introducing several datasets with different levels of class imbalance and metrics for performance evaluation. We then propose a novel method LTAD to detect defects from multiple and long-tailed classes without relying on dataset class names. LTAD combines AD by reconstruction and semantic AD modules. AD by reconstruction is implemented with a transformer-based reconstruction module. Semantic AD is implemented with a binary classifier which relies on learned pseudo class names and a pretrained foundation model. These modules are learned over two phases. Phase 1 learns the pseudo-class names and a variational autoencoder (VAE) for feature synthesis that augments the training data to combat long-tails. Phase 2 then learns the parameters of the reconstruction and classification modules of LTAD. Extensive experiments using the proposed long-tailed datasets show that LTAD substantially outperforms the state-of-the-art methods for most forms of dataset imbalance. The long-tailed dataset split is available at https://zenodo.org/records/10854201
[]
[]
[]
[]
2,212
2,213
ArGue: Attribute-Guided Prompt Tuning for Vision-Language Models
http://arxiv.org/abs/2311.16494
Xinyu Tian, Shu Zou, Zhaoyuan Yang, Jing Zhang
2311.16494
Although soft prompt tuning is effective in efficiently adapting Vision-Language (V&L) models for downstream tasks it shows limitations in dealing with distribution shifts. We address this issue with Attribute-Guided Prompt Tuning (ArGue) making three key contributions. 1) In contrast to the conventional approach of directly appending soft prompts preceding class names we align the model with primitive visual attributes generated by Large Language Models (LLMs). We posit that a model's ability to express high confidence in these attributes signifies its capacity to discern the correct class rationales. 2) We introduce attribute sampling to eliminate disadvantageous attributes thus only semantically meaningful attributes are preserved. 3) We propose negative prompting explicitly enumerating class-agnostic attributes to activate spurious correlations and encourage the model to generate highly orthogonal probability distributions in relation to these negative features. In experiments our method significantly outperforms current state-of-the-art prompt tuning methods on both novel class prediction and out-of-distribution generalization tasks.
[]
[]
[]
[]
2,213
2,214
Rapid 3D Model Generation with Intuitive 3D Input
Tianrun Chen, Chaotao Ding, Shangzhan Zhang, Chunan Yu, Ying Zang, Zejian Li, Sida Peng, Lingyun Sun
null
With the emergence of AR/VR 3D models are in tremendous demand. However conventional 3D modeling with Computer-Aided Design software requires much expertise and is difficult for novice users. We find that AR/VR devices in addition to serving as effective display mediums can offer a promising potential as an intuitive 3D model creation tool especially with the assistance of AI generative models. Here we propose Deep3DVRSketch the first 3D model generation network that inputs 3D VR sketches from novice users and generates highly consistent 3D models in multiple categories within seconds irrespective of the users' drawing abilities. We also contribute KO3D+ the largest 3D sketch-shape dataset. Our method pre-trains a conditional diffusion model on quality 3D data then fine-tunes an encoder to map 3D sketches onto the generator's manifold using an adaptive curriculum strategy for limited ground truths. In our experiment our approach achieves state-of-the-art performance in both model quality and fidelity with real-world input from novice users and users can even draw and obtain very detailed geometric structures. In our user study users were able to complete the 3D modeling tasks over 10 times faster using our approach compared to conventional CAD software tools. We believe that our Deep3DVRSketch and KO3D+ dataset can offer a promising solution for future 3D modeling in metaverse era. Check the project page at http://research.kokoni3d.com/Deep3DVRSketch.
[]
[]
[]
[]
2,214
2,215
GenTron: Diffusion Transformers for Image and Video Generation
Shoufa Chen, Mengmeng Xu, Jiawei Ren, Yuren Cong, Sen He, Yanping Xie, Animesh Sinha, Ping Luo, Tao Xiang, Juan-Manuel Perez-Rua
null
In this study we explore Transformer based diffusion models for image and video generation. Despite the dominance of Transformer architectures in various fields due to their flexibility and scalability the visual generative domain primarily utilizes CNN-based U-Net architectures particularly in diffusion-based models. We introduce GenTron a family of Generative models employing Transformer-based diffusion to address this gap. Our initial step was to adapt Diffusion Transformers (DiTs) from class to text conditioning a process involving thorough empirical exploration of the conditioning mechanism. We then scale GenTron from approximately 900M to over 3B parameters observing improvements in visual quality. Furthermore we extend GenTron to text-to-video generation incorporating novel motion-free guidance to enhance video quality. In human evaluations against SDXL GenTron achieves a 51.1% win rate in visual quality (with a 19.8% draw rate) and a 42.3% win rate in text alignment (with a 42.9% draw rate). GenTron notably performs well in T2I-CompBench highlighting its compositional generation ability. We hope GenTron could provide meaningful insights and serve as a valuable reference for future research. Please refer to the arXiv version for the most up-to-date results: https://arxiv.org/abs/2312.04557.
[]
[]
[]
[]
2,215
2,216
Close Imitation of Expert Retouching for Black-and-White Photography
Seunghyun Shin, Jisu Shin, Jihwan Bae, Inwook Shim, Hae-Gon Jeon
null
Since the widespread availability of cameras, black-and-white (BW) photography has been a popular choice for artistic and aesthetic expression. It highlights the main subject in varying tones of gray, creating various effects such as drama and contrast. However, producing BW photography often demands high-end cameras or photographic editing from experts. Even the experts prefer different styles depending on the subject, or even for the same subject, when taking grayscale photos or converting color images to BW. It is thus questionable which approach is better. To imitate the artistic values of decolorized images, this paper introduces a deep metric learning framework with a novel subject-style specified proxy and a large-scale BW dataset. Our proxy-based decolorization utilizes a hierarchical proxy-based loss and a hierarchical bilateral grid network to mimic the experts' retouching scheme. The proxy-based loss captures both expert-discriminative and class-sharing characteristics, while the hierarchical bilateral grid network enables imitating spatially-variant retouching by considering both global and local scene contexts. Our dataset, including color and BW images edited by three experts, demonstrates the scalability of our method, which can be further enhanced by constructing additional proxies from any set of BW photos, such as figures downloaded from the Internet. Our experiments show that our framework successfully produces visually pleasing BW images from color ones, as evaluated by user preference with respect to artistry and aesthetics.
[]
[]
[]
[]
2,216
2,217
TRIP: Temporal Residual Learning with Image Noise Prior for Image-to-Video Diffusion Models
http://arxiv.org/abs/2403.17005
Zhongwei Zhang, Fuchen Long, Yingwei Pan, Zhaofan Qiu, Ting Yao, Yang Cao, Tao Mei
2403.17005
Recent advances in text-to-video generation have demonstrated the utility of powerful diffusion models. Nevertheless the problem is not trivial when shaping diffusion models to animate static image (i.e. image-to-video generation). The difficulty originates from the aspect that the diffusion process of subsequent animated frames should not only preserve the faithful alignment with the given image but also pursue temporal coherence among adjacent frames. To alleviate this we present TRIP a new recipe of image-to-video diffusion paradigm that pivots on image noise prior derived from static image to jointly trigger inter-frame relational reasoning and ease the coherent temporal modeling via temporal residual learning. Technically the image noise prior is first attained through one-step backward diffusion process based on both static image and noised video latent codes. Next TRIP executes a residual-like dual-path scheme for noise prediction: 1) a shortcut path that directly takes image noise prior as the reference noise of each frame to amplify the alignment between the first frame and subsequent frames; 2) a residual path that employs 3D-UNet over noised video and static image latent codes to enable inter-frame relational reasoning thereby easing the learning of the residual noise for each frame. Furthermore both reference and residual noise of each frame are dynamically merged via attention mechanism for final video generation. Extensive experiments on WebVid-10M DTDB and MSR-VTT datasets demonstrate the effectiveness of our TRIP for image-to-video generation. Please see our project page at https://trip-i2v.github.io/TRIP/.
[]
[]
[]
[]
2,217
2,218
TexVocab: Texture Vocabulary-conditioned Human Avatars
http://arxiv.org/abs/2404.00524
Yuxiao Liu, Zhe Li, Yebin Liu, Haoqian Wang
2404.00524
To adequately utilize the available image evidence in multi-view video-based avatar modeling we propose TexVocab a novel avatar representation that constructs a texture vocabulary and associates body poses with texture maps for animation. Given multi-view RGB videos our method initially back-projects all the available images in the training videos to the posed SMPL surface producing texture maps in the SMPL UV domain. Then we construct pairs of human poses and texture maps to establish a texture vocabulary for encoding dynamic human appearances under various poses. Unlike the commonly used joint-wise manner we further design a body-part-wise encoding strategy to learn the structural effects of the kinematic chain. Given a driving pose we query the pose feature hierarchically by decomposing the pose vector into several body parts and interpolating the texture features for synthesizing fine-grained human dynamics. Overall our method is able to create animatable human avatars with detailed and dynamic appearances from RGB videos and the experiments show that our method outperforms state-of-the-art approaches.
[]
[]
[]
[]
2,218
2,219
KITRO: Refining Human Mesh by 2D Clues and Kinematic-tree Rotation
http://arxiv.org/abs/2405.19833
Fengyuan Yang, Kerui Gu, Angela Yao
2405.19833
2D keypoints are commonly used as an additional cue to refine estimated 3D human meshes. Current methods optimize the pose and shape parameters with a reprojection loss on the provided 2D keypoints. Such an approach, while simple and intuitive, has limited effectiveness because the optimal solution is hard to find in ambiguous parameter space and may sacrifice depth. Additionally, divergent gradients from distal joints complicate and deviate the refinement of proximal joints in the kinematic chain. To address these issues, we introduce Kinematic-Tree Rotation (KITRO), a novel mesh refinement strategy that explicitly models depth and the human kinematic-tree structure. KITRO treats refinement from a bone-wise perspective. Unlike previous methods which perform gradient-based optimizations, our method calculates bone directions in closed form. By accounting for the 2D pose, bone length, and parent joint's depth, the calculation results in two possible directions for each child joint. We then use a decision tree to trace binary choices for all bones along the human skeleton's kinematic tree to select the most probable hypothesis. Our experiments across various datasets and baseline models demonstrate that KITRO significantly improves 3D joint estimation accuracy and achieves an ideal 2D fit simultaneously. Our code is available at: https://github.com/MartaYang/KITRO.
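The closed-form, bone-wise step described above amounts to intersecting the child joint's camera ray with a sphere of bone-length radius centered at the parent joint, which yields up to two depth hypotheses per bone. The sketch below works that geometry out under a standard pinhole model; the variable names and intrinsics handling are assumptions, not the released KITRO code.

```python
import numpy as np

def child_depth_hypotheses(parent_xyz, child_uv, bone_len, K):
    """Intersect the child joint's camera ray with a sphere of radius bone_len
    around the parent joint; the quadratic yields up to two 3D hypotheses.
    parent_xyz: (3,) parent joint in camera coords, child_uv: (2,) pixel, K: 3x3."""
    parent_xyz = np.asarray(parent_xyz, dtype=float)
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    d = np.array([(child_uv[0] - cx) / fx, (child_uv[1] - cy) / fy, 1.0])  # ray direction
    a = d @ d
    b = -2.0 * d @ parent_xyz
    c = parent_xyz @ parent_xyz - bone_len ** 2
    disc = b * b - 4.0 * a * c
    if disc < 0:
        return []                                    # ray misses the bone-length sphere
    depths = [(-b - np.sqrt(disc)) / (2.0 * a), (-b + np.sqrt(disc)) / (2.0 * a)]
    return [z * d for z in depths if z > 0]          # candidate child joint positions
```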
[]
[]
[]
[]
2,219
2,220
BoQ: A Place is Worth a Bag of Learnable Queries
Amar Ali-bey, Brahim Chaib-draa, Philippe Giguère
null
In visual place recognition, accurately identifying and matching images of locations under varying environmental conditions and viewpoints remains a significant challenge. In this paper, we introduce a new technique called Bag-of-Queries (BoQ), which learns a set of global queries designed to capture universal place-specific attributes. Unlike existing techniques that employ self-attention and generate the queries directly from the input, BoQ employs distinct learnable global queries, which probe the input features via cross-attention, ensuring consistent information aggregation. In addition, this technique provides an interpretable attention mechanism and integrates with both CNN and Vision Transformer backbones. The performance of BoQ is demonstrated through extensive experiments on 14 large-scale benchmarks. It consistently outperforms current state-of-the-art techniques, including NetVLAD, MixVPR, and EigenPlaces. Moreover, despite being a global retrieval technique (one-stage), BoQ surpasses two-stage retrieval methods such as Patch-NetVLAD, TransVPR, and R2Former, all while being orders of magnitude faster and more efficient. The code and model weights are publicly available at https://github.com/amaralibey/Bag-of-Queries.
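A minimal sketch of the Bag-of-Queries idea follows: a set of learnable global queries cross-attends to the backbone's feature tokens to aggregate a place descriptor. The dimensions, the number of queries, and the flatten-and-normalize head are illustrative choices, not the released architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BagOfQueries(nn.Module):
    """Learnable global queries that probe backbone features via cross-attention
    (minimal sketch; sizes and the descriptor head are illustrative)."""
    def __init__(self, dim=256, num_queries=64, num_heads=8):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_queries, dim) * 0.02)
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, feats):                       # feats: (B, N, dim) backbone tokens
        q = self.queries.unsqueeze(0).expand(feats.size(0), -1, -1)
        out, _ = self.cross_attn(q, feats, feats)   # queries attend to the input features
        desc = self.norm(out).flatten(1)            # (B, num_queries * dim)
        return F.normalize(desc, dim=-1)            # L2-normalized global descriptor
```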
[]
[]
[]
[]
2,220
2,221
SuGaR: Surface-Aligned Gaussian Splatting for Efficient 3D Mesh Reconstruction and High-Quality Mesh Rendering
Antoine Guédon, Vincent Lepetit
null
We propose a method to allow precise and extremely fast mesh extraction from 3D Gaussian Splatting. Gaussian Splatting has recently become very popular as it yields realistic rendering while being significantly faster to train than NeRFs. It is, however, challenging to extract a mesh from the millions of tiny 3D Gaussians, as these Gaussians tend to be unorganized after optimization and no method has been proposed so far. Our first key contribution is a regularization term that encourages the Gaussians to align well with the surface of the scene. We then introduce a method that exploits this alignment to extract a mesh from the Gaussians using Poisson reconstruction, which is fast, scalable, and preserves details, in contrast to the Marching Cubes algorithm usually applied to extract meshes from Neural SDFs. Finally, we introduce an optional refinement strategy that binds Gaussians to the surface of the mesh and jointly optimizes these Gaussians and the mesh through Gaussian splatting rendering. This enables easy editing, sculpting, animating, and relighting of the Gaussians by manipulating the mesh instead of the Gaussians themselves. Retrieving such an editable mesh for realistic rendering is done within minutes with our method, compared to hours with the state-of-the-art method on SDFs, while providing a better rendering quality.
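The mesh-extraction step pairs surface-aligned Gaussians with off-the-shelf Poisson reconstruction. The snippet below shows a plausible way to do that with Open3D, assuming the Gaussian centers and per-Gaussian normals (e.g., from the flattest scaling axis) have already been extracted; it is an illustration of the idea, not the authors' pipeline.

```python
import numpy as np
import open3d as o3d

def mesh_from_aligned_gaussians(centers, normals, depth=9):
    """Poisson reconstruction over Gaussian centers once they are regularized to
    lie on the surface. Assumes normals were already derived per Gaussian."""
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(np.asarray(centers, dtype=np.float64))
    pcd.normals = o3d.utility.Vector3dVector(np.asarray(normals, dtype=np.float64))
    mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        pcd, depth=depth)
    densities = np.asarray(densities)
    # Poisson hallucinates surface far from the input points; prune low-density vertices
    mesh.remove_vertices_by_mask(densities < np.quantile(densities, 0.05))
    return mesh
```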
[]
[]
[]
[]
2,221
2,222
Understanding and Improving Source-free Domain Adaptation from a Theoretical Perspective
Yu Mitsuzumi, Akisato Kimura, Hisashi Kashima
null
Source-free Domain Adaptation (SFDA) is an emerging and challenging research area that addresses the problem of unsupervised domain adaptation (UDA) without source data. Though numerous successful methods have been proposed for SFDA a theoretical understanding of why these methods work well is still absent. In this paper we shed light on the theoretical perspective of existing SFDA methods. Specifically we find that SFDA loss functions comprising discriminability and diversity losses work in the same way as the training objective in the theory of self-training based on the expansion assumption which shows the existence of the target error bound. This finding brings two novel insights that enable us to build an improved SFDA method comprising 1) Model Training with Auto-Adjusting Diversity Constraint and 2) Augmentation Training with Teacher-Student Framework yielding a better recognition performance. Extensive experiments on three benchmark datasets demonstrate the validity of the theoretical analysis and our method.
[]
[]
[]
[]
2,222
2,223
Learning SO(3)-Invariant Semantic Correspondence via Local Shape Transform
Chunghyun Park, Seungwook Kim, Jaesik Park, Minsu Cho
null
Establishing accurate 3D correspondences between shapes stands as a pivotal challenge with profound implications for computer vision and robotics. However existing self-supervised methods for this problem assume perfect input shape alignment restricting their real-world applicability. In this work we introduce a novel self-supervised Rotation-Invariant 3D correspondence learner with Local Shape Transform dubbed RIST that learns to establish dense correspondences between shapes even under challenging intra-class variations and arbitrary orientations. Specifically RIST learns to dynamically formulate an SO(3)-invariant local shape transform for each point which maps the SO(3)-equivariant global shape descriptor of the input shape to a local shape descriptor. These local shape descriptors are provided as inputs to our decoder to facilitate point cloud self- and cross-reconstruction. Our proposed self-supervised training pipeline encourages semantically corresponding points from different shapes to be mapped to similar local shape descriptors enabling RIST to establish dense point-wise correspondences. RIST demonstrates state-of-the-art performances on 3D part label transfer and semantic keypoint transfer given arbitrarily rotated point cloud pairs outperforming existing methods by significant margins.
[]
[]
[]
[]
2,223
2,224
GigaPose: Fast and Robust Novel Object Pose Estimation via One Correspondence
http://arxiv.org/abs/2311.14155
Van Nguyen Nguyen, Thibault Groueix, Mathieu Salzmann, Vincent Lepetit
2311.14155
We present GigaPose, a fast, robust, and accurate method for CAD-based novel object pose estimation in RGB images. GigaPose first leverages discriminative "templates", rendered images of the CAD models, to recover the out-of-plane rotation, and then uses patch correspondences to estimate the four remaining parameters. Our approach samples templates in only a two-degrees-of-freedom space instead of the usual three, and matches the input image to the templates using fast nearest-neighbor search in feature space, resulting in a speedup factor of 35x compared to the state of the art. Moreover, GigaPose is significantly more robust to segmentation errors. Our extensive evaluation on the seven core datasets of the BOP challenge demonstrates that it achieves state-of-the-art accuracy and can be seamlessly integrated with existing refinement methods. Additionally, we show the potential of GigaPose with 3D models predicted by recent work on 3D reconstruction from a single image, relaxing the need for CAD models and making 6D object pose estimation much more convenient. Our source code and trained models are publicly available at https://github.com/nv-nguyen/gigaPose
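The first stage boils down to a nearest-neighbor lookup of the query crop against templates rendered over the two out-of-plane rotation degrees of freedom. A bare-bones version of that lookup is sketched below; cosine similarity and the tensor shapes are assumptions, and the subsequent patch-correspondence stage is omitted.

```python
import torch
import torch.nn.functional as F

def retrieve_template(query_feat, template_feats):
    """Nearest-neighbor retrieval of the best out-of-plane-rotation template for a
    query crop embedding (illustrative; the remaining four pose parameters are
    estimated separately). query_feat: (C,), template_feats: (num_templates, C)."""
    sims = F.normalize(template_feats, dim=-1) @ F.normalize(query_feat, dim=-1)
    best = int(sims.argmax())
    return best, float(sims[best])
```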
[]
[]
[]
[]
2,224
2,225
Imagine Before Go: Self-Supervised Generative Map for Object Goal Navigation
Sixian Zhang, Xinyao Yu, Xinhang Song, Xiaohan Wang, Shuqiang Jiang
null
The Object Goal navigation (ObjectNav) task requires the agent to navigate to a specified target in an unseen environment. Since the environment layout is unknown, the agent needs to infer the unknown contextual objects from partial observations, thereby deducing the likely location of the target. Previous end-to-end RL methods capture contextual relationships through implicit representations, but they lack a notion of geometry. Alternatively, modular methods construct local maps for recording the observed geometric structure of the unseen environment; however, the lack of contextual-relation reasoning limits their exploration efficiency. In this work, we propose the self-supervised generative map (SGM), a modular method that learns explicit context relations via self-supervised learning. The SGM is trained to leverage both episodic observations and general knowledge to reconstruct the masked pixels of a cropped global map. During navigation, the agent maintains an incomplete local semantic map, while the unknown regions of the local map are generated by the pre-trained SGM. Based on the generated map, the agent sets the predicted location of the target as the goal and moves towards it. Experiments on Gibson, MP3D, and HM3D show the effectiveness of our method.
[]
[]
[]
[]
2,225
2,226
Towards Effective Usage of Human-Centric Priors in Diffusion Models for Text-based Human Image Generation
http://arxiv.org/abs/2403.05239
Junyan Wang, Zhenhong Sun, Zhiyu Tan, Xuanbai Chen, Weihua Chen, Hao Li, Cheng Zhang, Yang Song
2403.05239
Vanilla text-to-image diffusion models struggle with generating accurate human images commonly resulting in imperfect anatomies such as unnatural postures or disproportionate limbs. Existing methods address this issue mostly by fine-tuning the model with extra images or adding additional controls --- human-centric priors such as pose or depth maps --- during the image generation phase. This paper explores the integration of these human-centric priors directly into the model fine-tuning stage essentially eliminating the need for extra conditions at the inference stage. We realize this idea by proposing a human-centric alignment loss to strengthen human-related information from the textual prompts within the cross-attention maps. To ensure semantic detail richness and human structural accuracy during fine-tuning we introduce scale-aware and step-wise constraints within the diffusion process according to an in-depth analysis of the cross-attention layer. Extensive experiments show that our method largely improves over state-of-the-art text-to-image models to synthesize high-quality human images based on user-written prompts.
[]
[]
[]
[]
2,226
2,227
A Video is Worth 256 Bases: Spatial-Temporal Expectation-Maximization Inversion for Zero-Shot Video Editing
http://arxiv.org/abs/2312.05856
Maomao Li, Yu Li, Tianyu Yang, Yunfei Liu, Dongxu Yue, Zhihui Lin, Dong Xu
2312.05856
This paper presents a video inversion approach for zero-shot video editing, which models the input video with a low-rank representation during the inversion process. Existing video editing methods usually apply the typical 2D DDIM inversion or naive spatial-temporal DDIM inversion before editing, which leverages a time-varying representation for each frame to derive the noisy latent. Unlike most existing approaches, we propose a Spatial-Temporal Expectation-Maximization (STEM) inversion, which formulates the dense video feature in an expectation-maximization manner and iteratively estimates a more compact basis set to represent the whole video. Each frame applies the fixed and global representation for inversion, which is more friendly for temporal consistency during reconstruction and editing. Extensive qualitative and quantitative experiments demonstrate that our STEM inversion can achieve consistent improvement on two state-of-the-art video editing methods. Project page: https://stem-inv.github.io/page/.
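The inversion's core is an EM loop that compresses all frame features into a small, shared basis set (the "256 bases" of the title). The sketch below shows that loop in a generic EM-attention form; the feature layout, initialization, and normalization are assumptions rather than the paper's exact procedure.

```python
import torch
import torch.nn.functional as F

def stem_bases(video_feats, num_bases=256, num_iters=5):
    """Expectation-maximization estimate of a compact basis set shared by all
    frames (illustrative). video_feats: (N, C) features gathered from the clip."""
    n = video_feats.size(0)
    bases = video_feats[torch.randperm(n)[:num_bases]].clone()       # (K, C) init
    for _ in range(num_iters):
        resp = torch.softmax(video_feats @ bases.t(), dim=-1)        # E-step: (N, K)
        bases = (resp.t() @ video_feats) / (resp.sum(dim=0, keepdim=True).t() + 1e-6)
        bases = F.normalize(bases, dim=-1)                           # M-step update
    return bases
```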
[]
[]
[]
[]
2,227
2,228
HIPTrack: Visual Tracking with Historical Prompts
http://arxiv.org/abs/2311.02072
Wenrui Cai, Qingjie Liu, Yunhong Wang
2311.02072
Trackers that follow Siamese paradigm utilize similarity matching between template and search region features for tracking. Many methods have been explored to enhance tracking performance by incorporating tracking history to better handle scenarios involving target appearance variations such as deformation and occlusion. However the utilization of historical information in existing methods is insufficient and incomprehensive which typically requires repetitive training and introduces a large amount of computation. In this paper we show that by providing a tracker that follows Siamese paradigm with precise and updated historical information a significant performance improvement can be achieved with completely unchanged parameters. Based on this we propose a historical prompt network that uses refined historical foreground masks and historical visual features of the target to provide comprehensive and precise prompts for the tracker. We build a novel tracker called HIPTrack based on the historical prompt network which achieves considerable performance improvements without the need to retrain the entire model. We conduct experiments on seven datasets and experimental results demonstrate that our method surpasses the current state-of-the-art trackers on LaSOT LaSOText GOT-10k and NfS. Furthermore the historical prompt network can seamlessly integrate as a plug-and-play module into existing trackers providing performance enhancements. The source code is available at https://github.com/WenRuiCai/HIPTrack.
[]
[]
[]
[]
2,228
2,229
URHand: Universal Relightable Hands
http://arxiv.org/abs/2401.05334
Zhaoxi Chen, Gyeongsik Moon, Kaiwen Guo, Chen Cao, Stanislav Pidhorskyi, Tomas Simon, Rohan Joshi, Yuan Dong, Yichen Xu, Bernardo Pires, He Wen, Lucas Evans, Bo Peng, Julia Buffalini, Autumn Trimble, Kevyn McPhail, Melissa Schoeller, Shoou-I Yu, Javier Romero, Michael Zollhofer, Yaser Sheikh, Ziwei Liu, Shunsuke Saito
2401.05334
Existing photorealistic relightable hand models require extensive identity-specific observations in different views poses and illuminations and face challenges in generalizing to natural illuminations and novel identities. To bridge this gap we present URHand the first universal relightable hand model that generalizes across viewpoints poses illuminations and identities. Our model allows few-shot personalization using images captured with a mobile phone and is ready to be photorealistically rendered under novel illuminations. To simplify the personalization process while retaining photorealism we build a powerful universal relightable prior based on neural relighting from multi-view images of hands captured in a light stage with hundreds of identities. The key challenge is scaling the cross-identity training while maintaining personalized fidelity and sharp details without compromising generalization under natural illuminations. To this end we propose a spatially varying linear lighting model as the neural renderer that takes physics-inspired shading as input feature. By removing non-linear activations and bias our specifically designed lighting model explicitly keeps the linearity of light transport. This enables single-stage training from light-stage data while generalizing to real-time rendering under arbitrary continuous illuminations across diverse identities. In addition we introduce the joint learning of a physically based model and our neural relighting model which further improves fidelity and generalization. Extensive experiments show that our approach achieves superior performance over existing methods in terms of both quality and generalizability. We also demonstrate quick personalization of URHand from a short phone scan of an unseen identity.
[]
[]
[]
[]
2,229
2,230
An N-Point Linear Solver for Line and Motion Estimation with Event Cameras
Ling Gao, Daniel Gehrig, Hang Su, Davide Scaramuzza, Laurent Kneip
null
Event cameras respond primarily to edges---formed by strong gradients---and are thus particularly well-suited for line-based motion estimation. Recent work has shown that events generated by a single line each satisfy a polynomial constraint which describes a manifold in the space-time volume. Multiple such constraints can be solved simultaneously to recover the partial linear velocity and line parameters. In this work we show that with a suitable line parametrization this system of constraints is actually linear in the unknowns which allows us to design a novel linear solver. Unlike existing solvers our linear solver (i) is fast and numerically stable since it does not rely on expensive root finding (ii) can solve both minimal and overdetermined systems with more than 5 events and (iii) admits the characterization of all degenerate cases and multiple solutions. The found line parameters are singularity-free and have a fixed scale which eliminates the need for auxiliary constraints typically encountered in previous work. To recover the full linear camera velocity we fuse observations from multiple lines with a novel velocity averaging scheme that relies on a geometrically-motivated residual and thus solves the problem more efficiently than previous schemes which minimize an algebraic residual. Extensive experiments in synthetic and real-world settings demonstrate that our method surpasses the previous work in numerical stability and operates over 600 times faster.
[]
[]
[]
[]
2,230
2,231
GenNBV: Generalizable Next-Best-View Policy for Active 3D Reconstruction
http://arxiv.org/abs/2402.16174
Xiao Chen, Quanyi Li, Tai Wang, Tianfan Xue, Jiangmiao Pang
2402.16174
While recent advances in neural radiance field enable realistic digitization for large-scale scenes the image-capturing process is still time-consuming and labor-intensive. Previous works attempt to automate this process using the Next-Best-View (NBV) policy for active 3D reconstruction. However the existing NBV policies heavily rely on hand-crafted criteria limited action space or per-scene optimized representations. These constraints limit their cross-dataset generalizability. To overcome them we propose GenNBV an end-to-end generalizable NBV policy. Our policy adopts a reinforcement learning (RL)-based framework and extends typical limited action space to 5D free space. It empowers our agent drone to scan from any viewpoint and even interact with unseen geometries during training. To boost the cross-dataset generalizability we also propose a novel multi-source state embedding including geometric semantic and action representations. We establish a benchmark using the Isaac Gym simulator with the Houses3K and OmniObject3D datasets to evaluate this NBV policy. Experiments demonstrate that our policy achieves a 98.26% and 97.12% coverage ratio on unseen building-scale objects from these datasets respectively outperforming prior solutions.
[]
[]
[]
[]
2,231
2,232
Deep-TROJ: An Inference Stage Trojan Insertion Algorithm through Efficient Weight Replacement Attack
Sabbir Ahmed, Ranyang Zhou, Shaahin Angizi, Adnan Siraj Rakin
null
To insert Trojan into a Deep Neural Network (DNN) the existing attack assumes the attacker can access the victim's training facilities. However a realistic threat model was recently developed by leveraging memory fault to inject Trojans at the inference stage. In this work we develop a novel Trojan attack by adopting a unique memory fault injection technique that can inject bit-flip into the page table of the main memory. In the main memory each weight block consists of a group of weights located at a specific address of a DRAM row. A bit-flip in the page frame number replaces a target weight block of a DNN model with another replacement weight block. To develop a successful Trojan attack leveraging this unique fault model the attacker must solve three key challenges: i) how to identify a minimum set of target weight blocks to be modified? ii) how to identify the corresponding optimal replacement weight block? iii) how to optimize the trigger to maximize the attacker's objective given a target and replacement weight block set? We address them by proposing a novel Deep-TROJ attack algorithm that can identify a minimum set of vulnerable target and corresponding replacement weight blocks while optimizing the trigger at the same time. We evaluate the performance of our proposed Deep-TROJ on CIFAR-10 CIFAR-100 and ImageNet dataset for sixteen different DNN architectures including vision transformers. Proposed Deep-TROJ is the most successful one to date that does not require access to training facilities while successfully bypassing the existing defenses. Our code is available at https://github.com/ML-Security-Research-LAB/Deep-TROJ.
[]
[]
[]
[]
2,232
2,233
Investigating and Mitigating the Side Effects of Noisy Views for Self-Supervised Clustering Algorithms in Practical Multi-View Scenarios
http://arxiv.org/abs/2303.17245
Jie Xu, Yazhou Ren, Xiaolong Wang, Lei Feng, Zheng Zhang, Gang Niu, Xiaofeng Zhu
2303.17245
Multi-view clustering (MVC) aims at exploring category structures among multi-view data in self-supervised manners. Multiple views provide more information than single views and thus existing MVC methods can achieve satisfactory performance. However their performance might seriously degenerate when the views are noisy in practical multi-view scenarios. In this paper we formally investigate the drawback of noisy views and then propose a theoretically grounded deep MVC method (namely MVCAN) to address this issue. Specifically we propose a novel MVC objective that enables un-shared parameters and inconsistent clustering predictions across multiple views to reduce the side effects of noisy views. Furthermore a two-level multi-view iterative optimization is designed to generate robust learning targets for refining individual views' representation learning. Theoretical analysis reveals that MVCAN works by achieving the multi-view consistency complementarity and noise robustness. Finally experiments on extensive public datasets demonstrate that MVCAN outperforms state-of-the-art methods and is robust against the existence of noisy views.
[]
[]
[]
[]
2,233
2,234
EvalCrafter: Benchmarking and Evaluating Large Video Generation Models
http://arxiv.org/abs/2310.11440
Yaofang Liu, Xiaodong Cun, Xuebo Liu, Xintao Wang, Yong Zhang, Haoxin Chen, Yang Liu, Tieyong Zeng, Raymond Chan, Ying Shan
2310.11440
Vision and language generative models have grown rapidly in recent years. For video generation, various open-sourced models and publicly available services have been developed to generate high-quality videos. However, these methods often use a few metrics, e.g., FVD or IS, to evaluate the performance. We argue that it is hard to judge the large conditional generative models from such simple metrics, since these models are often trained on very large datasets with multi-aspect abilities. Thus, we propose a novel framework and pipeline for exhaustively evaluating the performance of the generated videos. Our approach involves generating a diverse and comprehensive list of 700 prompts for text-to-video generation, which is based on an analysis of real-world user data and generated with the assistance of a large language model. Then, we evaluate the state-of-the-art video generative models on our carefully designed benchmark, in terms of visual qualities, content qualities, motion qualities, and text-video alignment, with 17 well-selected objective metrics. To obtain the final leaderboard of the models, we further fit a series of coefficients to align the objective metrics to the users' opinions. Based on the proposed human alignment method, our final score shows a higher correlation than simply averaging the metrics, showing the effectiveness of the proposed evaluation method.
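The final leaderboard step, fitting coefficients that map the 17 objective metrics onto user opinions, can be pictured as a small regression problem. The sketch below uses ordinary least squares with an intercept as a stand-in; the paper's actual fitting procedure may differ.

```python
import numpy as np

def fit_alignment_coefficients(metric_scores, human_scores):
    """Least-squares fit of per-metric coefficients (plus intercept) that map
    objective metrics to user opinions; a stand-in for the paper's alignment step.
    metric_scores: (num_models, num_metrics), human_scores: (num_models,)."""
    X = np.hstack([metric_scores, np.ones((metric_scores.shape[0], 1))])
    coef, *_ = np.linalg.lstsq(X, human_scores, rcond=None)
    final_scores = X @ coef               # aligned score used to rank the models
    return coef, final_scores
```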
[]
[]
[]
[]
2,234
2,235
SelfOcc: Self-Supervised Vision-Based 3D Occupancy Prediction
http://arxiv.org/abs/2311.12754
Yuanhui Huang, Wenzhao Zheng, Borui Zhang, Jie Zhou, Jiwen Lu
2311.12754
3D occupancy prediction is an important task for the robustness of vision-centric autonomous driving, which aims to predict whether each point is occupied in the surrounding 3D space. Existing methods usually require 3D occupancy labels to produce meaningful results. However, it is very laborious to annotate the occupancy status of each voxel. In this paper, we propose SelfOcc to explore a self-supervised way to learn 3D occupancy using only video sequences. We first transform the images into the 3D space (e.g., bird's eye view) to obtain a 3D representation of the scene. We directly impose constraints on the 3D representations by treating them as signed distance fields. We can then render 2D images of previous and future frames as self-supervision signals to learn the 3D representations. We propose an MVS-embedded strategy to directly optimize the SDF-induced weights with multiple depth proposals. Our SelfOcc outperforms the previous best method SceneRF by 58.7% using a single frame as input on SemanticKITTI and is the first self-supervised work that produces reasonable 3D occupancy for surround cameras on nuScenes. SelfOcc produces high-quality depth and achieves state-of-the-art results on novel depth synthesis, monocular depth estimation, and surround-view depth estimation on SemanticKITTI, KITTI-2015, and nuScenes, respectively. Code: https://github.com/huang-yh/SelfOcc.
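Rendering self-supervision from a signed distance field requires turning SDF samples along each ray into compositing weights. The sketch below shows one common SDF-to-weight conversion for that purpose; the sigmoid density mapping, the fixed sample spacing, and the parameter beta are assumptions, not SelfOcc's exact formulation.

```python
import torch

def sdf_to_render_weights(sdf, beta=0.1, delta=1.0):
    """Convert signed-distance samples along each ray into volume-rendering weights
    (a common SDF-to-density mapping; shown for illustration only).
    sdf: (num_rays, num_samples) signed distances at the sampled points."""
    density = torch.sigmoid(-sdf / beta) / beta                  # peaks near the surface
    alpha = 1.0 - torch.exp(-density * delta)                    # per-sample opacity
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[:, :1]), 1.0 - alpha + 1e-7], dim=-1),
        dim=-1)[:, :-1]                                          # accumulated transmittance
    return alpha * trans                                         # weights for 2D rendering
```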
[]
[]
[]
[]
2,235
2,236
SubT-MRS Dataset: Pushing SLAM Towards All-weather Environments
http://arxiv.org/abs/2307.07607
Shibo Zhao, Yuanjun Gao, Tianhao Wu, Damanpreet Singh, Rushan Jiang, Haoxiang Sun, Mansi Sarawata, Yuheng Qiu, Warren Whittaker, Ian Higgins, Yi Du, Shaoshu Su, Can Xu, John Keller, Jay Karhade, Lucas Nogueira, Sourojit Saha, Ji Zhang, Wenshan Wang, Chen Wang, Sebastian Scherer
2,307.07607
Simultaneous localization and mapping (SLAM) is a fundamental task for numerous applications such as autonomous navigation and exploration. Although many SLAM datasets have been released current SLAM solutions still struggle to have sustained and resilient performance. One major issue is the absence of high-quality datasets including diverse all-weather conditions and a reliable metric for assessing robustness. This limitation significantly restricts the scalability and generalizability of SLAM technologies impacting their development validation and deployment. To address this problem we present SubT-MRS an extremely challenging real-world dataset designed to push SLAM towards all-weather environments to pursue the most robust SLAM performance. It contains multi-degraded environments including over 30 diverse scenes such as structureless corridors varying lighting conditions and perceptual obscurants like smoke and dust; multimodal sensors such as LiDAR fisheye camera IMU and thermal camera; and multiple locomotions like aerial legged and wheeled robots. We developed accuracy and robustness evaluation tracks for SLAM and introduced novel robustness metrics. Comprehensive studies are performed revealing new observations challenges and opportunities for future research.
[]
[]
[]
[]
2,236
2,237
Named Entity Driven Zero-Shot Image Manipulation
Zhida Feng, Li Chen, Jing Tian, JiaXiang Liu, Shikun Feng
null
We introduce StyleEntity a zero-shot image manipulation model that utilizes named entities as proxies during its training phase. This strategy enables our model to manipulate images using unseen textual descriptions during inference all within a single training phase. Additionally we propose an inference technique termed Prompt Ensemble Latent Averaging (PELA). PELA averages the manipulation directions derived from various named entities during inference effectively eliminating the noise directions thus achieving stable manipulation. In our experiments StyleEntity exhibited superior performance in a zero-shot setting compared to other methods. The code model weights and datasets are available at https://github.com/feng-zhida/StyleEntity.
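A minimal sketch of the PELA idea, averaging manipulation directions obtained from several named-entity proxies so that entity-specific noise cancels out. The direction vectors below are synthetic; in practice they would come from the trained StyleEntity model.

```python
import numpy as np

def pela_direction(entity_directions: np.ndarray) -> np.ndarray:
    """Prompt Ensemble Latent Averaging (sketch).

    entity_directions: (n_entities, latent_dim) manipulation directions, one per
    named-entity proxy. Averaging cancels entity-specific noise components and
    keeps the shared semantic direction.
    """
    mean_dir = entity_directions.mean(axis=0)
    return mean_dir / (np.linalg.norm(mean_dir) + 1e-8)  # unit-norm direction

# Toy example: a shared direction plus per-entity noise.
rng = np.random.default_rng(1)
shared = rng.normal(size=512)
shared /= np.linalg.norm(shared)
noisy = shared + 0.5 * rng.normal(size=(32, 512))   # 32 hypothetical entity proxies
d = pela_direction(noisy)
print("cosine to true direction:", float(d @ shared))  # close to 1.0
```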
[]
[]
[]
[]
2,237
2,238
Relational Matching for Weakly Semi-Supervised Oriented Object Detection
Wenhao Wu, Hau-San Wong, Si Wu, Tianyou Zhang
null
Oriented object detection has witnessed significant progress in recent years. However the impressive performance of oriented object detectors comes at the huge cost of labor-intensive annotations and deteriorates once the annotated data becomes limited. Semi-supervised learning in which sufficient unannotated data are utilized to enhance the base detector is a promising method to address the annotation deficiency problem. Motivated by weakly supervised learning we introduce annotation-efficient point annotations for unannotated images and propose a weakly semi-supervised method for oriented object detection to balance the detection performance and annotation cost. Specifically we propose a Rotation-Modulated Relational Graph Matching method to match relations of proposals centered on annotated points between different models to alleviate the ambiguity of point annotations in depicting the oriented object. In addition we further propose a Relational Rank Distribution Matching method to align the rank distribution on classification and regression between different models. Finally to handle the difficult annotated points that both models are confused about we introduce weakly supervised learning to impose positive signals for difficult point-induced clusters to the base model and focus the base model on the occupancy between the predictions and annotated points. We perform extensive experiments on challenging datasets to demonstrate the effectiveness of our proposed weakly semi-supervised method in effectively leveraging unannotated data for significant performance improvement.
[]
[]
[]
[]
2,238
2,239
Rethinking the Representation in Federated Unsupervised Learning with Non-IID Data
http://arxiv.org/abs/2403.16398
Xinting Liao, Weiming Liu, Chaochao Chen, Pengyang Zhou, Fengyuan Yu, Huabin Zhu, Binhui Yao, Tao Wang, Xiaolin Zheng, Yanchao Tan
2,403.16398
Federated learning achieves effective performance in modeling decentralized data. In practice client data are not well-labeled which makes federated unsupervised learning (FUSL) with non-IID data a promising direction. However the performance of existing FUSL methods suffers from insufficient representations i.e. (1) representation collapse entanglement among local and global models and (2) inconsistent representation spaces among local models. The former indicates that representation collapse in a local model will subsequently impact the global model and other local models. The latter means that clients model data representation with inconsistent parameters due to the deficiency of supervision signals. In this work we propose FedU2 which enhances generating uniform and unified representation in FUSL with non-IID data. Specifically FedU2 consists of flexible uniform regularizer (FUR) and efficient unified aggregator (EUA). FUR in each client avoids representation collapse via dispersing samples uniformly and EUA in server promotes unified representation by constraining consistent client model updating. To extensively validate the performance of FedU2 we conduct both cross-device and cross-silo evaluation experiments on two benchmark datasets i.e. CIFAR10 and CIFAR100.
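The flexible uniform regularizer is described as dispersing samples uniformly to avoid collapse. A common way to express such a uniformity objective is the pairwise Gaussian-potential loss sketched below; FedU2's exact regularizer may differ, so treat this as an assumption-laden illustration.

```python
import torch

def uniformity_loss(z: torch.Tensor, t: float = 2.0) -> torch.Tensor:
    """Pairwise Gaussian-potential uniformity loss on L2-normalized features.

    Minimizing it spreads representations over the hypersphere and counteracts
    collapse. (Sketch of a uniform regularizer; FedU2's exact form may differ.)
    """
    z = torch.nn.functional.normalize(z, dim=1)
    sq_dists = torch.pdist(z, p=2).pow(2)          # all pairwise squared distances
    return sq_dists.mul(-t).exp().mean().log()

# Toy client-side check: collapsed features give a high loss, dispersed ones a low loss.
collapsed = torch.randn(1, 128).repeat(256, 1) + 0.01 * torch.randn(256, 128)
dispersed = torch.randn(256, 128)
print(uniformity_loss(collapsed).item(), uniformity_loss(dispersed).item())
```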
[]
[]
[]
[]
2,239
2,240
Distraction is All You Need: Memory-Efficient Image Immunization against Diffusion-Based Image Editing
Ling Lo, Cheng Yu Yeo, Hong-Han Shuai, Wen-Huang Cheng
null
Recent text-to-image (T2I) diffusion models have revolutionized image editing by empowering users to control outcomes using natural language. However the ease of image manipulation has raised ethical concerns with the potential for malicious use in generating deceptive or harmful content. To address the concerns we propose an image immunization approach named semantic attack to protect our images from being manipulated by malicious agents using diffusion models. Our approach focuses on disrupting the semantic understanding of T2I diffusion models regarding specific content. By attacking the cross-attention mechanism that encodes image features with text messages during editing we distract the model's attention regarding the content of our concern. Our semantic attack renders the model uncertain about the areas to edit resulting in poorly edited images and contradicting the malicious editing attempts. In addition by shifting the attack target towards intermediate attention maps from the final generated image our approach substantially diminishes computational burden and alleviates GPU memory constraints in comparison to previous methods. Moreover we introduce timestep universal gradient updating to create timestep-agnostic perturbations effective across different input noise levels. By treating the full diffusion process as discrete denoising timesteps during the attack we achieve equivalent or even superior immunization efficacy with nearly half the memory consumption of the previous method. Our contributions include a practical and effective approach to safeguard images against malicious editing and the proposed method offers robust immunization against various image inpainting and editing approaches showcasing its potential for real-world applications.
[]
[]
[]
[]
2,240
2,241
Knowledge-Enhanced Dual-stream Zero-shot Composed Image Retrieval
http://arxiv.org/abs/2403.16005
Yucheng Suo, Fan Ma, Linchao Zhu, Yi Yang
2,403.16005
We study the zero-shot Composed Image Retrieval (ZS-CIR) task which is to retrieve the target image given a reference image and a description without training on the triplet datasets. Previous works generate pseudo-word tokens by projecting the reference image features to the text embedding space. However they focus on the global visual representation ignoring the representation of detailed attributes e.g. color object number and layout. To address this challenge we propose a Knowledge-Enhanced Dual-stream zero-shot composed image retrieval framework (KEDs). KEDs implicitly models the attributes of the reference images by incorporating a database. The database enriches the pseudo-word tokens by providing relevant images and captions emphasizing shared attribute information in various aspects. In this way KEDs recognizes the reference image from diverse perspectives. Moreover KEDs adopts an extra stream that aligns pseudo-word tokens with textual concepts leveraging pseudo-triplets mined from image-text pairs. The pseudo-word tokens generated in this stream are explicitly aligned with fine-grained semantics in the text embedding space. Extensive experiments on widely used benchmarks i.e. ImageNet-R COCO object Fashion-IQ and CIRR show that KEDs outperforms previous zero-shot composed image retrieval methods. Code is available at https://github.com/suoych/KEDs.
[]
[]
[]
[]
2,241
2,242
Taming Self-Training for Open-Vocabulary Object Detection
http://arxiv.org/abs/2308.06412
Shiyu Zhao, Samuel Schulter, Long Zhao, Zhixing Zhang, Vijay Kumar B G, Yumin Suh, Manmohan Chandraker, Dimitris N. Metaxas
2,308.06412
Recent studies have shown promising performance in open-vocabulary object detection (OVD) by utilizing pseudo labels (PLs) from pretrained vision and language models (VLMs). However teacher-student self-training a powerful and widely used paradigm to leverage PLs is rarely explored for OVD. This work identifies two challenges of using self-training in OVD: noisy PLs from VLMs and frequent distribution changes of PLs. To address these challenges we propose SAS-Det that tames self-training for OVD from two key perspectives. First we present a split-and-fusion (SAF) head that splits a standard detection head into an open-branch and a closed-branch. This design can reduce noisy supervision from pseudo boxes. Moreover the two branches learn complementary knowledge from different training data significantly enhancing performance when fused together. Second in our view unlike in closed-set tasks the PL distributions in OVD are solely determined by the teacher model. We introduce a periodic update strategy to decrease the number of updates to the teacher thereby decreasing the frequency of changes in PL distributions which stabilizes the training process. Extensive experiments demonstrate SAS-Det is both efficient and effective. SAS-Det outperforms recent models of the same scale by a clear margin and achieves 37.4 AP50 and 29.1 APr on novel categories of the COCO and LVIS benchmarks respectively. Code is available at https://github.com/xiaofeng94/SAS-Det.
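The periodic update strategy for the teacher can be sketched as an EMA update that fires only every K steps, keeping the pseudo-label distribution fixed in between. The momentum, period and toy model below are illustrative, not the paper's settings.

```python
import copy
import torch
import torch.nn as nn

def maybe_update_teacher(teacher: nn.Module, student: nn.Module,
                         step: int, period: int = 1000, momentum: float = 0.5) -> None:
    """Periodic EMA update (sketch): the teacher only changes every `period`
    steps so the pseudo-label distribution stays fixed in between, which is the
    stabilizing idea described for SAS-Det. Hyperparameters are illustrative."""
    if step % period != 0:
        return
    with torch.no_grad():
        for p_t, p_s in zip(teacher.parameters(), student.parameters()):
            p_t.mul_(momentum).add_(p_s, alpha=1.0 - momentum)

student = nn.Linear(16, 4)                                  # stand-in for the detector
teacher = copy.deepcopy(student).requires_grad_(False)
for step in range(1, 3001):
    # ... student training step with pseudo labels from `teacher` would go here ...
    maybe_update_teacher(teacher, student, step)
```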
[]
[]
[]
[]
2,242
2,243
Grounding and Enhancing Grid-based Models for Neural Fields
http://arxiv.org/abs/2403.20002
Zelin Zhao, Fenglei Fan, Wenlong Liao, Junchi Yan
2,403.20002
Many contemporary studies utilize grid-based models for neural field representation but a systematic analysis of grid-based models is still missing hindering the improvement of those models. Therefore this paper introduces a theoretical framework for grid-based models. This framework points out that these models' approximation and generalization behaviors are determined by grid tangent kernels (GTK) which are intrinsic properties of grid-based models. The proposed framework facilitates a consistent and systematic analysis of diverse grid-based models. Furthermore the introduced framework motivates the development of a novel grid-based model named the Multiplicative Fourier Adaptive Grid (MulFAGrid). The numerical analysis demonstrates that MulFAGrid exhibits a lower generalization bound than its predecessors indicating its robust generalization performance. Empirical studies reveal that MulFAGrid achieves state-of-the-art performance in various tasks including 2D image fitting 3D signed distance field (SDF) reconstruction and novel view synthesis demonstrating superior representation ability. The project website is available at https://sites.google.com/view/cvpr24-2034-submission/home.
[]
[]
[]
[]
2,243
2,244
Bilateral Propagation Network for Depth Completion
http://arxiv.org/abs/2403.11270
Jie Tang, Fei-Peng Tian, Boshi An, Jian Li, Ping Tan
2,403.1127
Depth completion aims to derive a dense depth map from sparse depth measurements with a synchronized color image. Current state-of-the-art (SOTA) methods are predominantly propagation-based which work as an iterative refinement on the initial estimated dense depth. However the initial depth estimations mostly result from direct applications of convolutional layers on the sparse depth map. In this paper we present a Bilateral Propagation Network (BP-Net) that propagates depth at the earliest stage to avoid directly convolving on sparse data. Specifically our approach propagates the target depth from nearby depth measurements via a non-linear model whose coefficients are generated through a multi-layer perceptron conditioned on both radiometric difference and spatial distance. By integrating bilateral propagation with multi-modal fusion and depth refinement in a multi-scale framework our BP-Net demonstrates outstanding performance on both indoor and outdoor scenes. It achieves SOTA on the NYUv2 dataset and ranks 1st on the KITTI depth completion benchmark at the time of submission. Experimental results not only show the effectiveness of bilateral propagation but also emphasize the significance of early-stage propagation in contrast to the refinement stage. Our code and trained models will be available on the project page.
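A simplified sketch of the bilateral propagation idea: an MLP turns each neighbor's radiometric difference and spatial offset into a combination coefficient and the target depth is the weighted sum of nearby sparse measurements. The tensor layout and network size are assumptions; BP-Net's multi-scale formulation is considerably more elaborate.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BilateralPropagation(nn.Module):
    """Simplified bilateral propagation: for each target pixel an MLP maps the
    radiometric difference and spatial offset of K nearby sparse depth
    measurements to coefficients and the target depth is their weighted sum."""
    def __init__(self, hidden: int = 32):
        super().__init__()
        # Per-neighbor input: (|delta RGB|, dx, dy) -> one coefficient logit.
        self.mlp = nn.Sequential(nn.Linear(3, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, neighbor_depth, color_diff, offset):
        # neighbor_depth: (N, K), color_diff: (N, K, 1), offset: (N, K, 2)
        feats = torch.cat([color_diff, offset], dim=-1)       # (N, K, 3)
        coeffs = F.softmax(self.mlp(feats).squeeze(-1), dim=-1)
        return (coeffs * neighbor_depth).sum(dim=-1)          # (N,) propagated depth

N, K = 1024, 8                                  # 1024 target pixels, 8 sparse neighbors each
bp = BilateralPropagation()
depth = bp(torch.rand(N, K) * 10.0, torch.rand(N, K, 1), torch.randn(N, K, 2))
print(depth.shape)
```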
[]
[]
[]
[]
2,244
2,245
ESR-NeRF: Emissive Source Reconstruction Using LDR Multi-view Images
Jinseo Jeong, Junseo Koo, Qimeng Zhang, Gunhee Kim
null
Existing NeRF-based inverse rendering methods suppose that scenes are exclusively illuminated by distant light sources neglecting the potential influence of emissive sources within a scene. In this work we confront this limitation using LDR multi-view images captured with emissive sources turned on and off. Two key issues must be addressed: 1) ambiguity arising from the limited dynamic range along with unknown lighting details and 2) the expensive computational cost in volume rendering to backtrace the paths leading to final object colors. We present a novel approach ESR-NeRF leveraging neural networks as learnable functions to represent ray-traced fields. By training networks to satisfy light transport segments we regulate outgoing radiances progressively identifying emissive sources while being aware of reflection areas. The results on scenes encompassing emissive sources with various properties demonstrate the superiority of ESR-NeRF in qualitative and quantitative ways. Our approach also extends its applicability to the scenes devoid of emissive sources achieving lower CD metrics on the DTU dataset.
[]
[]
[]
[]
2,245
2,246
Infer from What You Have Seen Before: Temporally-dependent Classifier for Semi-supervised Video Segmentation
Jiafan Zhuang, Zilei Wang, Yixin Zhang, Zhun Fan
null
Due to the high expense of human labor one major challenge for semantic segmentation in real-world scenarios is the lack of sufficient pixel-level labels which is more serious when processing video data. To exploit unlabeled data for model training semi-supervised learning methods attempt to construct pseudo labels or various auxiliary constraints as supervision signals. However most of them just process video data as a set of independent images in a per-frame manner. The rich temporal relationships which can serve as valuable clues for representation learning are ignored. Besides this per-frame recognition paradigm is quite different from that of humans. Actually benefiting from the internal temporal relevance of video data humans would wisely use the distinguished semantic concepts in historical frames to aid the recognition of the current frame. Motivated by this observation we propose a novel temporally-dependent classifier (TDC) to mimic the human-like recognition procedure. Compared to the conventional classifier TDC can guide the model to learn a group of temporally-consistent semantic concepts across frames which essentially provides an implicit and effective constraint. We conduct extensive experiments on Cityscapes and CamVid and the results demonstrate the superiority of our proposed method over previous state-of-the-art methods. The code is available at https://github.com/jfzhuang/TDC.
[]
[]
[]
[]
2,246
2,247
Unleashing Channel Potential: Space-Frequency Selection Convolution for SAR Object Detection
Ke Li, Di Wang, Zhangyuan Hu, Wenxuan Zhu, Shaofeng Li, Quan Wang
null
Deep Convolutional Neural Networks (DCNNs) have achieved remarkable performance in synthetic aperture radar (SAR) object detection but this comes at the cost of tremendous computational resources partly due to extracting redundant features within a single convolutional layer. Recent works either delve into model compression methods or focus on the carefully-designed lightweight models both of which result in performance degradation. In this paper we propose an efficient convolution module for SAR object detection called SFS-Conv which increases feature diversity within each convolutional layer through a shunt-perceive-select strategy. Specifically we shunt input feature maps into space and frequency aspects. The former perceives the context of various objects by dynamically adjusting the receptive field while the latter captures abundant frequency variations and textural features via fractional Gabor transformer. To adaptively fuse features from space and frequency aspects a parameter-free feature selection module is proposed to ensure that the most representative and distinctive information is preserved. With SFS-Conv we build a lightweight SAR object detection network called SFS-CNet. Experimental results show that SFS-CNet outperforms state-of-the-art (SoTA) models on a series of SAR object detection benchmarks while simultaneously reducing both the model size and computational cost.
[]
[]
[]
[]
2,247
2,248
READ: Retrieval-Enhanced Asymmetric Diffusion for Motion Planning
Takeru Oba, Matthew Walter, Norimichi Ukita
null
This paper proposes Retrieval-Enhanced Asymmetric Diffusion (READ) for image-based robot motion planning. Given an image of the scene READ retrieves an initial motion from a database of image-motion pairs and uses a diffusion model to refine the motion for the given scene. Unlike prior retrieval-based diffusion models that require long forward-reverse diffusion paths READ directly diffuses between the source (retrieved) and target motions resulting in an efficient diffusion path. A second contribution of READ is its use of asymmetric diffusion whereby it preserves the kinematic feasibility of the generated motion by forward diffusion in a low-dimensional latent space while achieving high-resolution motion by reverse diffusion in the original task space using cold diffusion. Experimental results on various manipulation tasks demonstrate that READ outperforms state-of-the-art planning methods while ablation studies elucidate the contributions of asymmetric diffusion.
[]
[]
[]
[]
2,248
2,249
Video Frame Interpolation via Direct Synthesis with the Event-based Reference
Yuhan Liu, Yongjian Deng, Hao Chen, Zhen Yang
null
Video Frame Interpolation (VFI) has witnessed a surge in popularity due to its abundant downstream applications. Event-based VFI (E-VFI) has recently propelled the advancement of VFI. Thanks to their high temporal resolution event cameras can bridge the informational void present between successive video frames. Most state-of-the-art E-VFI methodologies follow the conventional VFI paradigm which pivots on motion estimation between consecutive frames to generate intermediate frames through a process of warping and refinement. However this reliance engenders a heavy dependency on the quality and consistency of keyframes rendering these methods susceptible to challenges in extreme real-world scenarios such as missing moving objects and severe occlusion dilemmas. This study proposes a novel E-VFI framework that directly synthesizes intermediate frames by leveraging an event-based reference obviating the necessity for explicit motion estimation and substantially enhancing the capacity to handle motion occlusion. Given the sparse and inherently noisy nature of event data we prioritize the reliability of the event-based reference leading to the development of an innovative event-aware reconstruction strategy for accurate reference generation. Besides we implement a bi-directional event-guided alignment from keyframes to the reference using the introduced E-PCD module. Finally a transformer-based decoder is adopted for prediction refinement. Comprehensive experimental evaluations on both synthetic and real-world datasets underscore the superiority of our approach and its potential to execute high-quality VFI tasks.
[]
[]
[]
[]
2,249
2,250
DSL-FIQA: Assessing Facial Image Quality via Dual-Set Degradation Learning and Landmark-Guided Transformer
Wei-Ting Chen, Gurunandan Krishnan, Qiang Gao, Sy-Yen Kuo, Sizhou Ma, Jian Wang
null
Generic Face Image Quality Assessment (GFIQA) evaluates the perceptual quality of facial images which is crucial in improving image restoration algorithms and selecting high-quality face images for downstream tasks. We present a novel transformer-based method for GFIQA which is aided by two unique mechanisms. First a novel Dual-Set Degradation Representation Learning (DSL) mechanism uses facial images with both synthetic and real degradations to decouple degradation from content ensuring generalizability to real-world scenarios. This self-supervised method learns degradation features on a global scale providing a robust alternative to conventional methods that use local patch information in degradation learning. Second our transformer leverages facial landmarks to emphasize visually salient parts of a face image in evaluating its perceptual quality. We also introduce a balanced and diverse Comprehensive Generic Face IQA (CGFIQA-40k) dataset of 40K images carefully designed to overcome the biases in particular the imbalances in skin tone and gender representation in existing datasets. Extensive analysis and evaluation demonstrate the robustness of our method marking a significant improvement over prior methods.
[]
[]
[]
[]
2,250
2,251
FMA-Net: Flow-Guided Dynamic Filtering and Iterative Feature Refinement with Multi-Attention for Joint Video Super-Resolution and Deblurring
Geunhyuk Youk, Jihyong Oh, Munchurl Kim
null
We present a joint learning scheme of video super-resolution and deblurring called VSRDB to restore clean high-resolution (HR) videos from blurry low-resolution (LR) ones. This joint restoration problem has drawn much less attention compared to single restoration problems. In this paper we propose a novel flow-guided dynamic filtering (FGDF) and iterative feature refinement with multi-attention (FRMA) which constitutes our VSRDB framework denoted as FMA-Net. Specifically our proposed FGDF enables precise estimation of both spatio-temporally-variant degradation and restoration kernels that are aware of motion trajectories through sophisticated motion representation learning. Compared to conventional dynamic filtering the FGDF enables the FMA-Net to effectively handle large motions in VSRDB. Additionally the stacked FRMA blocks trained with our novel temporal anchor (TA) loss which temporally anchors and sharpens features refine features in a coarse-to-fine manner through iterative updates. Extensive experiments demonstrate the superiority of the proposed FMA-Net over state-of-the-art methods in terms of both quantitative and qualitative quality. Codes and pre-trained models are available at: https://kaist-viclab.github.io/fmanet-site.
[]
[]
[]
[]
2,251
2,252
OVMR: Open-Vocabulary Recognition with Multi-Modal References
Zehong Ma, Shiliang Zhang, Longhui Wei, Qi Tian
null
The challenge of open-vocabulary recognition lies in the fact that the model has no clue about the new categories it is applied to. Existing works have proposed different methods to embed category cues into the model e.g. through few-shot fine-tuning providing category names or textual descriptions to Vision-Language Models. Fine-tuning is time-consuming and degrades the generalization capability. Textual descriptions could be ambiguous and fail to depict visual details. This paper tackles open-vocabulary recognition from a different perspective by referring to multi-modal clues composed of textual descriptions and exemplar images. Our method named OVMR adopts two innovative components to pursue a more robust category cues embedding. A multi-modal classifier is first generated by dynamically complementing textual descriptions with image exemplars. A preference-based refinement module is hence applied to fuse uni-modal and multi-modal classifiers with the aim of alleviating issues of low-quality exemplar images or textual descriptions. The proposed OVMR is a plug-and-play module and works well with exemplar images randomly crawled from the Internet. Extensive experiments have demonstrated the promising performance of OVMR e.g. it outperforms existing methods across various scenarios and setups. Codes are publicly available at https://github.com/Zehong-Ma/OVMR.
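A hedged sketch of building a multi-modal classifier: per-category text embeddings are complemented with the mean of exemplar-image embeddings and the fused vectors act as classifier weights for CLIP-style cosine logits. The fixed fusion weight stands in for OVMR's learned preference-based refinement, and all tensors here are synthetic.

```python
import torch
import torch.nn.functional as F

def multimodal_classifier(text_emb: torch.Tensor, exemplar_emb: torch.Tensor,
                          alpha: float = 0.5) -> torch.Tensor:
    """Build one classifier weight per category from both modalities (sketch).

    text_emb:     (C, D)    one text embedding per category
    exemplar_emb: (C, K, D) K exemplar-image embeddings per category
    alpha:        fixed fusion weight standing in for a learned, preference-based
                  refinement.
    """
    visual = F.normalize(exemplar_emb.mean(dim=1), dim=-1)
    textual = F.normalize(text_emb, dim=-1)
    return F.normalize(alpha * textual + (1 - alpha) * visual, dim=-1)

C, K, D = 10, 5, 512
weights = multimodal_classifier(torch.randn(C, D), torch.randn(C, K, D))
image_features = F.normalize(torch.randn(4, D), dim=-1)
logits = 100.0 * image_features @ weights.T      # CLIP-style scaled cosine logits
print(logits.argmax(dim=1))
```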
[]
[]
[]
[]
2,252
2,253
Hourglass Tokenizer for Efficient Transformer-Based 3D Human Pose Estimation
http://arxiv.org/abs/2311.12028
Wenhao Li, Mengyuan Liu, Hong Liu, Pichao Wang, Jialun Cai, Nicu Sebe
2,311.12028
Transformers have been successfully applied in the field of video-based 3D human pose estimation. However the high computational costs of these video pose transformers (VPTs) make them impractical on resource-constrained devices. In this paper we present a plug-and-play pruning-and-recovering framework called Hourglass Tokenizer (HoT) for efficient transformer-based 3D human pose estimation from videos. Our HoT begins with pruning pose tokens of redundant frames and ends with recovering full-length tokens resulting in a few pose tokens in the intermediate transformer blocks and thus improving the model efficiency. To effectively achieve this we propose a token pruning cluster (TPC) that dynamically selects a few representative tokens with high semantic diversity while eliminating the redundancy of video frames. In addition we develop a token recovering attention (TRA) to restore the detailed spatio-temporal information based on the selected tokens thereby expanding the network output to the original full-length temporal resolution for fast inference. Extensive experiments on two benchmark datasets (i.e. Human3.6M and MPI-INF-3DHP) demonstrate that our method can achieve both high efficiency and estimation accuracy compared to the original VPT models. For instance applying to MotionBERT and MixSTE on Human3.6M our HoT can save nearly 50% FLOPs without sacrificing accuracy and nearly 40% FLOPs with only 0.2% accuracy drop respectively. Code and models are available at https://github.com/NationalGAILab/HoT.
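The prune-then-recover pipeline can be sketched as follows: keep a small set of frame tokens for the intermediate blocks, then restore the full temporal length with a cross-attention module. The top-k-by-norm selection below is a simplification of the paper's token pruning cluster, and the recovery module only mimics the interface of the token recovering attention.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TokenRecoverAttention(nn.Module):
    """Stand-in for a token recovering step: per-frame learnable queries
    cross-attend to the few kept tokens to restore the full sequence length."""
    def __init__(self, full_len: int, dim: int):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(full_len, dim) * 0.02)

    def forward(self, kept: torch.Tensor) -> torch.Tensor:   # (B, K, D) -> (B, T, D)
        q = self.queries.unsqueeze(0).expand(kept.size(0), -1, -1)
        attn = F.softmax(q @ kept.transpose(1, 2) / kept.size(-1) ** 0.5, dim=-1)
        return attn @ kept

def prune_tokens(tokens: torch.Tensor, keep: int) -> torch.Tensor:
    """Keep `keep` representative frame tokens, (B, T, D) -> (B, keep, D).
    Selection is simplified to top-k by feature norm; HoT's TPC instead
    clusters tokens to retain high semantic diversity."""
    idx = tokens.norm(dim=-1).topk(keep, dim=1).indices.sort(dim=1).values
    return torch.gather(tokens, 1, idx.unsqueeze(-1).expand(-1, -1, tokens.size(-1)))

x = torch.randn(2, 243, 256)                 # 243-frame pose-token sequence
kept = prune_tokens(x, keep=61)              # fed to the intermediate transformer blocks
out = TokenRecoverAttention(243, 256)(kept)  # back to the full temporal resolution
print(kept.shape, out.shape)                 # (2, 61, 256) (2, 243, 256)
```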
[]
[]
[]
[]
2,253
2,254
Boosting Diffusion Models with Moving Average Sampling in Frequency Domain
http://arxiv.org/abs/2403.17870
Yurui Qian, Qi Cai, Yingwei Pan, Yehao Li, Ting Yao, Qibin Sun, Tao Mei
2,403.1787
Diffusion models have recently brought a powerful revolution in image generation. Despite showing impressive generative capabilities most of these models rely on the current sample to denoise the next one possibly resulting in denoising instability. In this paper we reinterpret the iterative denoising process as model optimization and leverage a moving average mechanism to ensemble all the prior samples. Instead of simply applying moving average to the denoised samples at different timesteps we first map the denoised samples to data space and then perform moving average to avoid distribution shift across timesteps. In view that diffusion models evolve the recovery from low-frequency components to high-frequency details we further decompose the samples into different frequency components and execute moving average separately on each component. We name the complete approach "Moving Average Sampling in Frequency domain (MASF)". MASF could be seamlessly integrated into mainstream pre-trained diffusion models and sampling schedules. Extensive experiments on both unconditional and conditional diffusion models demonstrate that our MASF leads to superior performances compared to the baselines with almost negligible additional complexity cost.
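The core mechanism, a moving average of the denoised (data-space) samples with separate treatment of low- and high-frequency components, can be sketched with FFT-based masking. Integration into an actual sampler is omitted, and the cutoff and momenta below are illustrative rather than the paper's values.

```python
import torch

class FrequencyMovingAverage:
    """Sketch of the MASF idea: keep an exponential moving average of the
    denoised (data-space) samples separately for low- and high-frequency
    components. Momenta and the cutoff are illustrative."""
    def __init__(self, cutoff: float = 0.25, m_low: float = 0.9, m_high: float = 0.5):
        self.cutoff, self.m_low, self.m_high = cutoff, m_low, m_high
        self.avg_low = self.avg_high = None

    def _split(self, x: torch.Tensor):
        f = torch.fft.fftshift(torch.fft.fft2(x), dim=(-2, -1))
        h, w = x.shape[-2:]
        yy, xx = torch.meshgrid(torch.linspace(-1, 1, h), torch.linspace(-1, 1, w),
                                indexing="ij")
        mask = ((yy ** 2 + xx ** 2).sqrt() <= self.cutoff).float().to(x.device)
        return f * mask, f * (1 - mask)

    def update(self, x0_pred: torch.Tensor) -> torch.Tensor:
        low, high = self._split(x0_pred)
        if self.avg_low is None:
            self.avg_low, self.avg_high = low, high
        else:
            self.avg_low = self.m_low * self.avg_low + (1 - self.m_low) * low
            self.avg_high = self.m_high * self.avg_high + (1 - self.m_high) * high
        merged = torch.fft.ifftshift(self.avg_low + self.avg_high, dim=(-2, -1))
        return torch.fft.ifft2(merged).real

ema = FrequencyMovingAverage()
for _ in range(10):                       # stand-in for denoising timesteps
    x0_pred = torch.randn(1, 3, 64, 64)   # would come from the diffusion model
    x0_smoothed = ema.update(x0_pred)     # fed back into the sampling update
print(x0_smoothed.shape)
```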
[]
[]
[]
[]
2,254
2,255
GART: Gaussian Articulated Template Models
http://arxiv.org/abs/2311.16099
Jiahui Lei, Yufu Wang, Georgios Pavlakos, Lingjie Liu, Kostas Daniilidis
2,311.16099
We introduce Gaussian Articulated Template Model (GART) an explicit efficient and expressive representation for non-rigid articulated subject capturing and rendering from monocular videos. GART utilizes a mixture of moving 3D Gaussians to explicitly approximate a deformable subject's geometry and appearance. It takes advantage of a categorical template model prior (SMPL SMAL etc.) with learnable forward skinning while further generalizing to more complex non-rigid deformations with novel latent bones. GART can be reconstructed via differentiable rendering from monocular videos in seconds or minutes and rendered in novel poses faster than 150fps.
[]
[]
[]
[]
2,255
2,256
Global and Local Prompts Cooperation via Optimal Transport for Federated Learning
http://arxiv.org/abs/2403.00041
Hongxia Li, Wei Huang, Jingya Wang, Ye Shi
2,403.00041
Prompt learning in pretrained visual-language models has shown remarkable flexibility across various downstream tasks. Leveraging its inherent lightweight nature recent research attempted to integrate the powerful pretrained models into federated learning frameworks to simultaneously reduce communication costs and promote local training on insufficient data. Despite these efforts current federated prompt learning methods lack specialized designs to systematically address severe data heterogeneities e.g. data distribution with both label and feature shifts involved. To address this challenge we present Federated Prompts Cooperation via Optimal Transport (FedOTP) which introduces efficient collaborative prompt learning strategies to capture diverse category traits on a per-client basis. Specifically for each client we learn a global prompt to extract consensus knowledge among clients and a local prompt to capture client-specific category characteristics. Unbalanced Optimal Transport is then employed to align local visual features with these prompts striking a balance between global consensus and local personalization. By relaxing one of the equality constraints FedOTP enables prompts to focus solely on core image patch regions. Extensive experiments on datasets with various types of heterogeneities have demonstrated that our FedOTP outperforms the state-of-the-art methods.
[]
[]
[]
[]
2,256
2,257
Bi-Causal: Group Activity Recognition via Bidirectional Causality
Youliang Zhang, Wenxuan Liu, Danni Xu, Zhuo Zhou, Zheng Wang
null
Current approaches in Group Activity Recognition (GAR) predominantly emphasize Human Relations (HRs) while often neglecting the impact of Human-Object Interactions (HOIs). This study prioritizes the consideration of both HRs and HOIs emphasizing their interdependence. Notably employing Granger Causality Tests reveals the presence of bidirectional causality between HRs and HOIs. Leveraging this insight we propose a Bidirectional-Causal GAR network. This network establishes a causality communication channel while modeling relations and interactions enabling reciprocal enhancement between human-object interactions and human relations ensuring their mutual consistency. Additionally an Interaction Module is devised to effectively capture the dynamic nature of human-object interactions. Comprehensive experiments conducted on two publicly available datasets showcase the superiority of our proposed method over state-of-the-art approaches.
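The bidirectional Granger causality check mentioned above can be illustrated on toy time series with statsmodels. Real usage would feed per-frame HR and HOI signals extracted from video; the lag structure and signals below are synthetic.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

# Toy stand-ins for per-frame activity signals: `hoi` is a human-object
# interaction strength and `hr` a human-relation strength that lags it.
rng = np.random.default_rng(0)
T = 300
hoi = rng.normal(size=T)
hr = np.roll(hoi, 2) + 0.3 * rng.normal(size=T)     # hr follows hoi with a 2-frame lag

def granger_p(cause: np.ndarray, effect: np.ndarray, maxlag: int = 4) -> float:
    """Smallest p-value over lags for 'cause Granger-causes effect'.
    statsmodels expects a (T, 2) array with the effect in column 0."""
    res = grangercausalitytests(np.column_stack([effect, cause]), maxlag=maxlag)
    return min(res[lag][0]["ssr_ftest"][1] for lag in range(1, maxlag + 1))

print("HOI -> HR p-value:", granger_p(hoi, hr))   # small: evidence of causality
print("HR -> HOI p-value:", granger_p(hr, hoi))   # typically larger on this toy data
```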
[]
[]
[]
[]
2,257
2,258
Space-Time Diffusion Features for Zero-Shot Text-Driven Motion Transfer
http://arxiv.org/abs/2311.17009
Danah Yatim, Rafail Fridman, Omer Bar-Tal, Yoni Kasten, Tali Dekel
2,311.17009
We present a new method for text-driven motion transfer - synthesizing a video that complies with an input text prompt describing the target objects and scene while maintaining an input video's motion and scene layout. Prior methods are confined to transferring motion across two subjects within the same or closely related object categories and are applicable for limited domains (e.g. humans). In this work we consider a significantly more challenging setting in which the target and source objects differ drastically in shape and fine-grained motion characteristics (e.g. translating a jumping dog into a dolphin). To this end we leverage a pre-trained and fixed text-to-video diffusion model which provides us with generative and motion priors. The pillar of our method is a new space-time feature loss derived directly from the model. This loss guides the generation process to preserve the overall motion of the input video while complying with the target object in terms of shape and fine-grained motion traits.
[]
[]
[]
[]
2,258
2,259
KP-RED: Exploiting Semantic Keypoints for Joint 3D Shape Retrieval and Deformation
Ruida Zhang, Chenyangguang Zhang, Yan Di, Fabian Manhardt, Xingyu Liu, Federico Tombari, Xiangyang Ji
null
In this paper we present KP-RED a unified KeyPoint-driven REtrieval and Deformation framework that takes object scans as input and jointly retrieves and deforms the most geometrically similar CAD models from a pre-processed database to tightly match the target. Unlike existing dense matching based methods that typically struggle with noisy partial scans we propose to leverage category-consistent sparse keypoints to naturally handle both full and partial object scans. Specifically we first employ a lightweight retrieval module to establish a keypoint-based embedding space measuring the similarity among objects by dynamically aggregating deformation-aware local-global features around extracted keypoints. Objects that are close in the embedding space are considered similar in geometry. Then we introduce the neural cage-based deformation module that estimates the influence vector of each keypoint upon cage vertices inside its local support region to control the deformation of the retrieved shape. Extensive experiments on the synthetic dataset PartNet and the real-world dataset Scan2CAD demonstrate that KP-RED surpasses existing state-of-the-art approaches by a large margin. Codes and trained models will be released in https://github.com/lolrudy/KP-RED.
[]
[]
[]
[]
2,259
2,260
Learning from One Continuous Video Stream
http://arxiv.org/abs/2312.00598
João Carreira, Michael King, Viorica Patraucean, Dilara Gokay, Catalin Ionescu, Yi Yang, Daniel Zoran, Joseph Heyward, Carl Doersch, Yusuf Aytar, Dima Damen, Andrew Zisserman
2,312.00598
We introduce a framework for online learning from a single continuous video stream - the way people and animals learn without mini-batches data augmentation or shuffling. This poses great challenges given the high correlation between consecutive video frames and there is very little prior work on it. Our framework allows us to do a first deep dive into the topic and includes a collection of streams and tasks composed from two existing video datasets plus methodology for performance evaluation that considers both adaptation and generalization. We employ pixel-to-pixel modelling as a practical and flexible way to switch between pre-training and single-stream evaluation as well as between arbitrary tasks without ever requiring changes to models and always using the same pixel loss. Equipped with this framework we obtained large single-stream learning gains from pre-training with a novel family of future prediction tasks found that momentum hurts and that the pace of weight updates matters. The combination of these insights leads to matching the performance of IID learning with batch size 1 when using the same architecture and without costly replay buffers.
[]
[]
[]
[]
2,260
2,261
VGGSfM: Visual Geometry Grounded Deep Structure From Motion
Jianyuan Wang, Nikita Karaev, Christian Rupprecht, David Novotny
null
Structure-from-motion (SfM) is a long-standing problem in the computer vision community which aims to reconstruct the camera poses and 3D structure of a scene from a set of unconstrained 2D images. Classical frameworks solve this problem in an incremental manner by detecting and matching keypoints registering images triangulating 3D points and conducting bundle adjustment. Recent research efforts have predominantly revolved around harnessing the power of deep learning techniques to enhance specific elements (e.g. keypoint matching) but are still based on the original non-differentiable pipeline. Instead we propose a new deep SfM pipeline where each component is fully differentiable and thus can be trained in an end-to-end manner. To this end we introduce new mechanisms and simplifications. First we build on recent advances in deep 2D point tracking to extract reliable pixel-accurate tracks which eliminates the need for chaining pairwise matches. Furthermore we recover all cameras simultaneously based on the image and track features instead of gradually registering cameras. Finally we optimise the cameras and triangulate 3D points via a differentiable bundle adjustment layer. We attain state-of-the-art performance on three popular datasets CO3D IMC Phototourism and ETH3D.
[]
[]
[]
[]
2,261
2,262
MIGC: Multi-Instance Generation Controller for Text-to-Image Synthesis
http://arxiv.org/abs/2402.05408
Dewei Zhou, You Li, Fan Ma, Xiaoting Zhang, Yi Yang
2,402.05408
We present a Multi-Instance Generation (MIG) task simultaneously generating multiple instances with diverse controls in one image. Given a set of predefined coordinates and their corresponding descriptions the task is to ensure that generated instances are accurately at the designated locations and that all instances' attributes adhere to their corresponding description. This broadens the scope of current research on Single-instance generation elevating it to a more versatile and practical dimension. Inspired by the idea of divide and conquer we introduce an innovative approach named Multi-Instance Generation Controller (MIGC) to address the challenges of the MIG task. Initially we break down the MIG task into several subtasks each involving the shading of a single instance. To ensure precise shading for each instance we introduce an instance enhancement attention mechanism. Lastly we aggregate all the shaded instances to provide the necessary information for accurately generating multiple instances in stable diffusion (SD). To evaluate how well generation models perform on the MIG task we provide a COCO-MIG benchmark along with an evaluation pipeline. Extensive experiments were conducted on the proposed COCO-MIG benchmark as well as on various commonly used benchmarks. The evaluation results illustrate the exceptional control capabilities of our model in terms of quantity position attribute and interaction. Code and demos will be released at https://migcproject.github.io/.
[]
[]
[]
[]
2,262
2,263
Distilling CLIP with Dual Guidance for Learning Discriminative Human Body Shape Representation
Feng Liu, Minchul Kim, Zhiyuan Ren, Xiaoming Liu
null
Person Re-Identification (ReID) holds critical importance in computer vision with pivotal applications in public safety and crime prevention. Traditional ReID methods reliant on appearance attributes such as clothing and color encounter limitations in long-term scenarios and dynamic environments. To address these challenges we propose CLIP3DReID an innovative approach that enhances person ReID by integrating linguistic descriptions with visual perception leveraging pretrained CLIP model for knowledge distillation. Our method first employs CLIP to automatically label body shapes with linguistic descriptors. We then apply optimal transport theory to align the student model's local visual features with shape-aware tokens derived from CLIP's linguistic output. Additionally we align the student model's global visual features with those from the CLIP image encoder and the 3D SMPL identity space fostering enhanced domain robustness. CLIP3DReID notably excels in discerning discriminative body shape features achieving state-of-the-art results in person ReID. Our approach represents a significant advancement in ReID offering robust solutions to existing challenges and setting new directions for future research.
[]
[]
[]
[]
2,263
2,264
Retrieval-Augmented Open-Vocabulary Object Detection
http://arxiv.org/abs/2404.05687
Jooyeon Kim, Eulrang Cho, Sehyung Kim, Hyunwoo J. Kim
2,404.05687
Open-vocabulary object detection (OVD) has been studied with Vision-Language Models (VLMs) to detect novel objects beyond the pre-trained categories. Previous approaches improve the generalization ability to expand the knowledge of the detector using 'positive' pseudo-labels with additional 'class' names e.g. sock iPod and alligator. To extend the previous methods in two aspects we propose Retrieval-Augmented Losses and visual Features (RALF). Our method retrieves related 'negative' classes and augments loss functions. Also visual features are augmented with 'verbalized concepts' of classes e.g. worn on the feet handheld music player and sharp teeth. Specifically RALF consists of two modules: Retrieval Augmented Losses (RAL) and Retrieval-Augmented visual Features (RAF). RAL constitutes two losses reflecting the semantic similarity with negative vocabularies. In addition RAF augments visual features with the verbalized concepts from a large language model (LLM). Our experiments demonstrate the effectiveness of RALF on COCO and LVIS benchmark datasets. We achieve improvement up to 3.4 box AP_50^N on novel categories of the COCO dataset and 3.6 mask AP_r gains on the LVIS dataset. Code is available at https://github.com/mlvlab/RALF.
[]
[]
[]
[]
2,264
2,265
MULTIFLOW: Shifting Towards Task-Agnostic Vision-Language Pruning
http://arxiv.org/abs/2404.05621
Matteo Farina, Massimiliano Mancini, Elia Cunegatti, Gaowen Liu, Giovanni Iacca, Elisa Ricci
2,404.05621
While excellent in transfer learning Vision-Language models (VLMs) come with high computational costs due to their large number of parameters. To address this issue removing parameters via model pruning is a viable solution. However existing techniques for VLMs are task-specific and thus require pruning the network from scratch for each new task of interest. In this work we explore a new direction: Task-Agnostic Vision-Language Pruning (TA-VLP). Given a pretrained VLM the goal is to find a unique pruned counterpart transferable to multiple unknown downstream tasks. In this challenging setting the transferable representations already encoded in the pretrained model are a key aspect to preserve. Thus we propose Multimodal Flow Pruning (MULTIFLOW) a first gradient-free pruning framework for TA-VLP where: (i) the importance of a parameter is expressed in terms of its magnitude and its information flow by incorporating the saliency of the neurons it connects; and (ii) pruning is driven by the emergent (multimodal) distribution of the VLM parameters after pretraining. We benchmark eight state-of-the-art pruning algorithms in the context of TA-VLP experimenting with two VLMs three vision-language tasks and three pruning ratios. Our experimental results show that MULTIFLOW outperforms recent sophisticated combinatorial competitors in the vast majority of the cases paving the way towards addressing TA-VLP. The code is publicly available at https://github.com/FarinaMatteo/multiflow.
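A sketch of the scoring idea: a parameter's importance combines its magnitude with the saliency of the neurons it connects, and low-scoring weights are zeroed. The saliency proxy and the global quantile threshold below are assumptions; MULTIFLOW's information-flow formulation and its distribution-aware pruning are more involved.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def flow_style_scores(layer: nn.Linear) -> torch.Tensor:
    """Importance of each weight = |w| scaled by the saliency of the neurons it
    connects (sketch). Neuron saliency is approximated here by the aggregate
    weight magnitude flowing in/out of that neuron."""
    w = layer.weight.abs()                      # (out, in)
    in_saliency = w.sum(dim=0)                  # one score per input neuron
    out_saliency = w.sum(dim=1)                 # one score per output neuron
    return w * out_saliency.unsqueeze(1) * in_saliency.unsqueeze(0)

@torch.no_grad()
def prune_layer(layer: nn.Linear, sparsity: float = 0.7) -> None:
    scores = flow_style_scores(layer)
    threshold = torch.quantile(scores.flatten(), sparsity)
    layer.weight.mul_((scores > threshold).float())   # zero out low-importance weights

layer = nn.Linear(768, 768)
prune_layer(layer, sparsity=0.7)
print("kept fraction:", (layer.weight != 0).float().mean().item())
```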
[]
[]
[]
[]
2,265
2,266
Spin-UP: Spin Light for Natural Light Uncalibrated Photometric Stereo
Zongrui Li, Zhan Lu, Haojie Yan, Boxin Shi, Gang Pan, Qian Zheng, Xudong Jiang
null
Natural Light Uncalibrated Photometric Stereo (NaUPS) relieves the strict environment and light assumptions in classical Uncalibrated Photometric Stereo (UPS) methods. However due to the intrinsic ill-posedness and high-dimensional ambiguities addressing NaUPS is still an open question. Existing works impose strong assumptions on the environment lights and objects' material restricting the effectiveness in more general scenarios. Alternatively some methods leverage supervised learning with intricate models while lacking interpretability resulting in a biased estimation. In this work we propose Spin Light Uncalibrated Photometric Stereo (Spin-UP) an unsupervised method to tackle NaUPS in various environment lights and objects. The proposed method uses a novel setup that captures the object's images on a rotatable platform which mitigates NaUPS's ill-posedness by reducing unknowns and provides reliable priors to alleviate NaUPS's ambiguities. Leveraging neural inverse rendering and the proposed training strategies Spin-UP recovers surface normals environment light and isotropic reflectance under complex natural light with low computational cost. Experiments have shown that Spin-UP outperforms other supervised / unsupervised NaUPS methods and achieves state-of-the-art performance on synthetic and real-world datasets. Codes and data are available at https://github.com/LMozart/CVPR2024-SpinUP.
[]
[]
[]
[]
2,266
2,267
LLaFS: When Large Language Models Meet Few-Shot Segmentation
http://arxiv.org/abs/2311.16926
Lanyun Zhu, Tianrun Chen, Deyi Ji, Jieping Ye, Jun Liu
2,311.16926
This paper proposes LLaFS the first attempt to leverage large language models (LLMs) in few-shot segmentation. In contrast to the conventional few-shot segmentation methods that only rely on the limited and biased information from the annotated support images LLaFS leverages the vast prior knowledge gained by LLM as an effective supplement and directly uses the LLM to segment images in a few-shot manner. To enable the text-based LLM to handle image-related tasks we carefully design an input instruction that allows the LLM to produce segmentation results represented as polygons and propose a region-attribute table to simulate the human visual mechanism and provide multi-modal guidance. We also synthesize pseudo samples and use curriculum learning for pretraining to augment data and achieve better optimization. LLaFS achieves state-of-the-art results on multiple datasets showing the potential of using LLMs for few-shot computer vision tasks.
[]
[]
[]
[]
2,267
2,268
Kernel Adaptive Convolution for Scene Text Detection via Distance Map Prediction
Jinzhi Zheng, Heng Fan, Libo Zhang
null
Segmentation-based scene text detection algorithms that are accurate to the pixel level can handle the detection of arbitrary-shape scene text and have received widespread attention. On the one hand due to the complexity and diversity of the scene text the convolution with a fixed kernel size has some limitations in extracting the visual features of the scene text. On the other hand most of the existing segmentation-based algorithms only segment the center of the text losing information such as the edges and directions of the text which limits detection accuracy. There are also some improved algorithms that use iterative corrections or introduce other multiple information to improve text detection accuracy but at the expense of efficiency. To address these issues this paper proposes a simple and effective scene text detection method the Kernel Adaptive Convolution which is designed with a Kernel Adaptive Convolution Module for scene text detection via predicting the distance map. Specifically first we design an extensible kernel adaptive convolution module (KACM) to extract visual features from multiple convolutions with different kernel sizes in an adaptive manner. Secondly our method predicts the text distance map under the supervision of a priori information (including direction map and foreground segmentation map) and completes the text detection from the predicted distance map. Experiments on four publicly available datasets prove the effectiveness of our algorithm which outperforms the state-of-the-art algorithms in both accuracy and efficiency on Total-Text and TD500. On ArT and CTW1500 the efficiency is improved while the accuracy remains competitive.
[]
[]
[]
[]
2,268
2,269
PixelLM: Pixel Reasoning with Large Multimodal Model
http://arxiv.org/abs/2312.02228
Zhongwei Ren, Zhicheng Huang, Yunchao Wei, Yao Zhao, Dongmei Fu, Jiashi Feng, Xiaojie Jin
2,312.02228
While large multimodal models (LMMs) have achieved remarkable progress generating pixel-level masks for image reasoning tasks involving multiple open-world targets remains a challenge. To bridge this gap we introduce PixelLM an effective and efficient LMM for pixel-level reasoning and understanding. Central to PixelLM are a novel lightweight pixel decoder and a comprehensive segmentation codebook. The decoder efficiently produces masks from the hidden embeddings of the codebook tokens which encode detailed target-relevant information. With this design PixelLM harmonizes with the structure of popular LMMs and avoids the need for additional costly segmentation models. Furthermore we propose a token fusion method to enhance the model's ability to differentiate between multiple targets leading to substantially improved mask quality. To advance research in this area we construct MUSE a high-quality multi-target reasoning segmentation benchmark. PixelLM excels across various pixel-level image reasoning and understanding tasks outperforming well-established methods in multiple benchmarks including MUSE and multi-referring segmentation. Comprehensive ablations confirm the efficacy of each proposed component. All code models and datasets will be publicly available.
[]
[]
[]
[]
2,269
2,270
MRFS: Mutually Reinforcing Image Fusion and Segmentation
Hao Zhang, Xuhui Zuo, Jie Jiang, Chunchao Guo, Jiayi Ma
null
This paper proposes a coupled learning framework to break the performance bottleneck of infrared-visible image fusion and segmentation called MRFS. By leveraging the intrinsic consistency between vision and semantics it emphasizes mutual reinforcement rather than treating these tasks as separate issues. First we embed weakened information recovery and salient information integration into the image fusion task employing the CNN-based interactive gated mixed attention (IGM-Att) module to extract high-quality visual features. This aims to satisfy human visual perception producing fused images with rich textures high contrast and vivid colors. Second a transformer-based progressive cycle attention (PC-Att) module is developed to enhance semantic segmentation. It establishes single-modal self-reinforcement and cross-modal mutual complementarity enabling more accurate decisions in machine semantic perception. Then the cascade of IGM-Att and PC-Att couples image fusion and semantic segmentation tasks implicitly bringing vision-related and semantics-related features into closer alignment. Therefore they mutually provide learning priors to each other resulting in visually satisfying fused images and more accurate segmentation decisions. Extensive experiments on public datasets showcase the advantages of our method in terms of visual satisfaction and decision accuracy. The code is publicly available at https://github.com/HaoZhang1018/MRFS.
[]
[]
[]
[]
2,270
2,271
MemoNav: Working Memory Model for Visual Navigation
http://arxiv.org/abs/2402.19161
Hongxin Li, Zeyu Wang, Xu Yang, Yuran Yang, Shuqi Mei, Zhaoxiang Zhang
2,402.19161
Image-goal navigation is a challenging task that requires an agent to navigate to a goal indicated by an image in unfamiliar environments. Existing methods utilizing diverse scene memories suffer from inefficient exploration since they use all historical observations for decision-making without considering the goal-relevant fraction. To address this limitation we present MemoNav a novel memory model for image-goal navigation which utilizes a working memory-inspired pipeline to improve navigation performance. Specifically we employ three types of navigation memory. The node features on a map are stored in the short-term memory (STM) as these features are dynamically updated. A forgetting module then retains the informative STM fraction to increase efficiency. We also introduce long-term memory (LTM) to learn global scene representations by progressively aggregating STM features. Subsequently a graph attention module encodes the retained STM and the LTM to generate working memory (WM) which contains the scene features essential for efficient navigation. The synergy among these three memory types boosts navigation performance by enabling the agent to learn and leverage goal-relevant scene features within a topological map. Our evaluation on multi-goal tasks demonstrates that MemoNav significantly outperforms previous methods across all difficulty levels in both Gibson and Matterport3D scenes. Qualitative results further illustrate that MemoNav plans more efficient routes.
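The forgetting module, retaining only the goal-relevant fraction of short-term memory, can be sketched as a similarity-based top-k selection. Cosine similarity to the goal embedding is a stand-in for MemoNav's learned retention criterion, and all tensors below are synthetic.

```python
import torch
import torch.nn.functional as F

def forget_stm(stm: torch.Tensor, goal: torch.Tensor, keep_ratio: float = 0.5) -> torch.Tensor:
    """Retain the goal-relevant fraction of short-term memory (sketch).

    stm:  (N, D) node features stored on the topological map
    goal: (D,)   embedding of the goal image
    Relevance here is cosine similarity to the goal; the learned forgetting
    criterion of the paper may differ.
    """
    scores = F.cosine_similarity(stm, goal.unsqueeze(0), dim=1)
    keep = max(1, int(keep_ratio * stm.size(0)))
    idx = scores.topk(keep).indices
    return stm[idx]

stm = torch.randn(40, 256)          # 40 map-node features
goal = torch.randn(256)
retained = forget_stm(stm, goal)    # passed on to build LTM and working memory
print(retained.shape)               # (20, 256)
```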
[]
[]
[]
[]
2,271
2,272
Robust Depth Enhancement via Polarization Prompt Fusion Tuning
http://arxiv.org/abs/2404.04318
Kei Ikemura, Yiming Huang, Felix Heide, Zhaoxiang Zhang, Qifeng Chen, Chenyang Lei
2,404.04318
Existing depth sensors are imperfect and may provide inaccurate depth values in challenging scenarios such as in the presence of transparent or reflective objects. In this work we present a general framework that leverages polarization imaging to improve inaccurate depth measurements from various depth sensors. Previous polarization-based depth enhancement methods focus on utilizing pure physics-based formulas for a single sensor. In contrast our method first adopts a learning-based strategy where a neural network is trained to estimate a dense and complete depth map from polarization data and a sensor depth map from different sensors. To further improve the performance we propose a Polarization Prompt Fusion Tuning (PPFT) strategy to effectively utilize RGB-based models pre-trained on large-scale datasets as the size of the polarization dataset is limited to train a strong model from scratch. We conducted extensive experiments on a public dataset and the results demonstrate that the proposed method performs favorably compared to existing depth enhancement baselines. Code and demos are available at https://lastbasket.github.io/PPFT/.
[]
[]
[]
[]
2,272
2,273
AssistGUI: Task-Oriented PC Graphical User Interface Automation
Difei Gao, Lei Ji, Zechen Bai, Mingyu Ouyang, Peiran Li, Dongxing Mao, Qinchen Wu, Weichen Zhang, Peiyi Wang, Xiangwu Guo, Hengxu Wang, Luowei Zhou, Mike Zheng Shou
null
Graphical User Interface (GUI) automation holds significant promise for assisting users with complex tasks thereby boosting human productivity. Existing works leveraging Large Language Model (LLM) or LLM-based AI agents have shown capabilities in automating tasks on Android and Web platforms. However these tasks are primarily aimed at simple device usage and entertainment operations. This paper presents a novel benchmark AssistGUI to evaluate whether models are capable of manipulating the mouse and keyboard on the Windows platform in response to user-requested tasks. We carefully collected a set of 100 tasks from nine widely-used software applications such as After Effects and MS Word each accompanied by the necessary project files for better evaluation. Moreover we propose a multi-agent collaboration framework which incorporates four agents to perform task decomposition GUI parsing action generation and reflection. Our experimental results reveal that our multi-agent collaboration mechanism outshines existing methods in performance. Nevertheless the potential remains substantial with the best model attaining only a 46% success rate on our benchmark. We conclude with a thorough analysis of the current methods' limitations setting the stage for future breakthroughs in this domain.
[]
[]
[]
[]
2,273
2,274
Adaptive Multi-Modal Cross-Entropy Loss for Stereo Matching
http://arxiv.org/abs/2306.15612
Peng Xu, Zhiyu Xiang, Chengyu Qiao, Jingyun Fu, Tianyu Pu
2,306.15612
Despite the great success of deep learning in stereo matching recovering accurate disparity maps is still challenging. Currently L1 and cross-entropy are the two most widely used losses for stereo network training. Compared with the former the latter usually performs better thanks to its probability modeling and direct supervision to the cost volume. However how to accurately model the stereo ground-truth for cross-entropy loss remains largely under-explored. Existing works simply assume that the ground-truth distributions are uni-modal which ignores the fact that most of the edge pixels can be multi-modal. In this paper a novel adaptive multi-modal cross-entropy loss (ADL) is proposed to guide the networks to learn different distribution patterns for each pixel. Moreover we optimize the disparity estimator to further alleviate the bleeding or misalignment artifacts in inference. Extensive experimental results show that our method is generic and can help classic stereo networks regain state-of-the-art performance. In particular GANet with our method ranks 1st on both the KITTI 2015 and 2012 benchmarks among the published methods. Meanwhile excellent synthetic-to-realistic generalization performance can be achieved by simply replacing the traditional loss with ours. Code is available at https://github.com/xxxupeng/ADL.
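The multi-modal ground-truth idea can be sketched by building the target disparity distribution as a mixture of Laplacian kernels over several modes and taking cross-entropy against the softmaxed cost volume. The fixed mixture weights replace ADL's adaptive per-pixel modeling, so this is only an illustration of the loss shape.

```python
import torch
import torch.nn.functional as F

def multimodal_gt_distribution(modes: torch.Tensor, weights: torch.Tensor,
                               max_disp: int = 192, b: float = 1.0) -> torch.Tensor:
    """Ground-truth disparity distribution as a mixture of Laplacian kernels
    centred on one or more modes (sketch; the adaptive per-pixel weighting of
    the paper is replaced by fixed weights).

    modes:   (P, M) candidate disparities per pixel
    weights: (P, M) mixture weights per pixel (rows sum to 1)
    returns: (P, max_disp) distribution over disparity bins
    """
    d = torch.arange(max_disp, dtype=torch.float32)                          # (max_disp,)
    kernels = torch.exp(-(d.view(1, 1, -1) - modes.unsqueeze(-1)).abs() / b)  # (P, M, max_disp)
    dist = (weights.unsqueeze(-1) * kernels).sum(dim=1)
    return dist / dist.sum(dim=1, keepdim=True)

def stereo_cross_entropy(cost_logits: torch.Tensor, gt_dist: torch.Tensor) -> torch.Tensor:
    """Cross-entropy between the predicted disparity distribution (softmax over
    the cost volume) and the multi-modal ground-truth distribution."""
    log_p = F.log_softmax(cost_logits, dim=1)
    return -(gt_dist * log_p).sum(dim=1).mean()

P, max_disp = 8, 192
cost_logits = torch.randn(P, max_disp, requires_grad=True)
# An edge pixel mixing foreground (d=30) and background (d=75) disparities:
modes = torch.tensor([[30.0, 75.0]]).repeat(P, 1)
weights = torch.tensor([[0.7, 0.3]]).repeat(P, 1)
loss = stereo_cross_entropy(cost_logits, multimodal_gt_distribution(modes, weights))
loss.backward()
print(loss.item())
```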
[]
[]
[]
[]
2,274
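Illustrative sketch (not the ADL authors' code): one way to build a multi-modal cross-entropy target over a stereo cost volume, mixing a Laplacian mode centered at the ground-truth disparity with a second mode drawn from a local neighbourhood as a crude stand-in for the second surface at edge pixels. The neighbourhood heuristic, the mixing weight alpha, and the Laplacian scale b are assumptions for illustration only.

import torch
import torch.nn.functional as F

def laplace_distribution(center, candidates, b):
    # candidates: (D,) disparity hypotheses; center: (..., 1)
    logits = -torch.abs(candidates - center) / b
    return torch.softmax(logits, dim=-1)

def multimodal_ce_loss(cost_volume, gt_disp, alpha=0.8, b=1.0):
    # cost_volume: (B, D, H, W) raw scores over D disparity candidates
    # gt_disp:     (B, H, W) ground-truth disparity (float)
    B, D, H, W = cost_volume.shape
    candidates = torch.arange(D, device=cost_volume.device, dtype=cost_volume.dtype)
    # mode 1: Laplacian centered at the ground-truth disparity
    p_gt = laplace_distribution(gt_disp.unsqueeze(-1), candidates, b)        # (B,H,W,D)
    # mode 2: median disparity in a 3x3 neighbourhood (proxy for a second mode at edges)
    neighb = F.unfold(gt_disp.unsqueeze(1), kernel_size=3, padding=1)        # (B,9,H*W)
    second = neighb.median(dim=1).values.view(B, H, W)
    p_nb = laplace_distribution(second.unsqueeze(-1), candidates, b)
    target = alpha * p_gt + (1.0 - alpha) * p_nb                             # (B,H,W,D)
    log_q = F.log_softmax(cost_volume, dim=1).permute(0, 2, 3, 1)            # (B,H,W,D)
    return -(target * log_q).sum(dim=-1).mean()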
2,275
Unlocking the Potential of Prompt-Tuning in Bridging Generalized and Personalized Federated Learning
http://arxiv.org/abs/2310.18285
Wenlong Deng, Christos Thrampoulidis, Xiaoxiao Li
2,310.18285
Vision Transformers (ViT) and Visual Prompt Tuning (VPT) achieve state-of-the-art performance with improved efficiency in various computer vision tasks. This suggests a promising paradigm shift of adapting pre-trained ViT models to Federated Learning (FL) settings. However the challenge of data heterogeneity among FL clients presents a significant hurdle in effectively deploying ViT models. Existing Generalized FL (GFL) and Personalized FL (PFL) methods have limitations in balancing performance across both global and local data distributions. In this paper we present a novel algorithm SGPT that integrates GFL and PFL approaches by employing a unique combination of both shared and group-specific prompts. This design enables SGPT to capture both common and group-specific features. A key feature of SGPT is its prompt selection module which facilitates the training of a single global model capable of automatically adapting to diverse local client data distributions without the need for local fine-tuning. To effectively train the prompts we utilize block coordinate descent (BCD) learning from common feature information (shared prompts) and then more specialized knowledge (group prompts) iteratively. Theoretically we justify that learning the proposed prompts can reduce the gap between global and local performance. Empirically we conduct experiments on both label and feature heterogeneity settings in comparison with state-of-the-art baselines along with extensive ablation studies to substantiate the superior performance of SGPT.
[]
[]
[]
[]
2,275
2,276
Compact 3D Gaussian Representation for Radiance Field
http://arxiv.org/abs/2311.13681
Joo Chan Lee, Daniel Rho, Xiangyu Sun, Jong Hwan Ko, Eunbyung Park
2,311.13681
Neural Radiance Fields (NeRFs) have demonstrated remarkable potential in capturing complex 3D scenes with high fidelity. However one persistent challenge that hinders the widespread adoption of NeRFs is the computational bottleneck due to the volumetric rendering. On the other hand 3D Gaussian splatting (3DGS) has recently emerged as an alternative representation that leverages a 3D Gaussian-based representation and adopts the rasterization pipeline to render the images rather than volumetric rendering achieving very fast rendering speed and promising image quality. However a significant drawback arises as 3DGS entails a substantial number of 3D Gaussians to maintain the high fidelity of the rendered images which requires a large amount of memory and storage. To address this critical issue we place a specific emphasis on two key objectives: reducing the number of Gaussian points without sacrificing performance and compressing the Gaussian attributes such as view-dependent color and covariance. To this end we propose a learnable mask strategy that significantly reduces the number of Gaussians while preserving high performance. In addition we propose a compact but effective representation of view-dependent color by employing a grid-based neural field rather than relying on spherical harmonics. Finally we learn codebooks to compactly represent the geometric attributes of Gaussian by vector quantization. With model compression techniques such as quantization and entropy coding we consistently show over 25x reduced storage and enhanced rendering speed while maintaining the quality of the scene representation compared to 3DGS. Our work provides a comprehensive framework for 3D scene representation achieving high performance fast training compactness and real-time rendering. Our project page is available at https://maincold2.github.io/c3dgs/.
[]
[]
[]
[]
2,276
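A minimal sketch of the learnable-mask idea described above (a simplification, not the released implementation): each Gaussian carries a learnable logit, a hard binary mask is used in the forward pass, and gradients flow through the sigmoid via a straight-through estimator. The threshold and the sparsity penalty are assumptions.

import torch

def gaussian_mask(mask_logits, threshold=0.01):
    # mask_logits: (N,) one learnable scalar per Gaussian
    soft = torch.sigmoid(mask_logits)
    hard = (soft > threshold).float()
    return hard + soft - soft.detach()   # straight-through estimator

# Usage sketch: multiply each Gaussian's opacity (and/or scale) by the mask during
# rasterization, add a sparsity penalty such as lambda * torch.sigmoid(mask_logits).mean()
# to the training loss, and periodically prune Gaussians whose mask stays at zero.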
2,277
PaSCo: Urban 3D Panoptic Scene Completion with Uncertainty Awareness
http://arxiv.org/abs/2312.02158
Anh-Quan Cao, Angela Dai, Raoul de Charette
2,312.02158
We propose the task of Panoptic Scene Completion (PSC) which extends the recently popular Semantic Scene Completion (SSC) task with instance-level information to produce a richer understanding of the 3D scene. Our PSC proposal utilizes a hybrid mask-based technique on the nonempty voxels from sparse multi-scale completions. Whereas the SSC literature overlooks uncertainty which is critical for robotics applications we instead propose an efficient ensembling to estimate both voxel-wise and instance-wise uncertainties along PSC. This is achieved by building on a multi-input multi-output (MIMO) strategy while improving performance and yielding better uncertainty for little additional compute. Additionally we introduce a technique to aggregate permutation-invariant mask predictions. Our experiments demonstrate that our method surpasses all baselines in both Panoptic Scene Completion and uncertainty estimation on three large-scale autonomous driving datasets. Our code and data are available at https://astra-vision.github.io/PaSCo.
[]
[]
[]
[]
2,277
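Hedged sketch of the ensembling idea: given semantic logits from the multiple MIMO outputs, the voxel-wise uncertainty can be summarized as the predictive entropy of the averaged distribution. This is a generic ensemble-uncertainty computation, not PaSCo's exact estimator.

import torch

def mimo_voxel_uncertainty(logits_list):
    # logits_list: list of (C, X, Y, Z) semantic logits, one per MIMO output
    probs = torch.stack([l.softmax(dim=0) for l in logits_list], dim=0)   # (M,C,X,Y,Z)
    mean_p = probs.mean(dim=0)                                            # (C,X,Y,Z)
    entropy = -(mean_p * mean_p.clamp_min(1e-8).log()).sum(dim=0)         # (X,Y,Z)
    return mean_p, entropy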
2,278
GALA: Generating Animatable Layered Assets from a Single Scan
http://arxiv.org/abs/2401.12979
Taeksoo Kim, Byungjun Kim, Shunsuke Saito, Hanbyul Joo
2,401.12979
We present GALA a framework that takes as input a single-layer clothed 3D human mesh and decomposes it into complete multi-layered 3D assets. The outputs can then be combined with other assets to create novel clothed human avatars with any pose. Existing reconstruction approaches often treat clothed humans as a single-layer of geometry and overlook the inherent compositionality of humans with hairstyles clothing and accessories thereby limiting the utility of the meshes for down-stream applications. Decomposing a single-layer mesh into separate layers is a challenging task because it requires the synthesis of plausible geometry and texture for the severely occluded regions. Moreover even with successful decomposition meshes are not normalized in terms of poses and body shapes failing coherent composition with novel identities and poses. To address these challenges we propose to leverage the general knowledge of a pretrained 2D diffusion model as geometry and appearance prior for humans and other assets. We first separate the input mesh using the 3D surface segmentation extracted from multi-view 2D segmentations. Then we synthesize the missing geometry of different layers in both posed and canonical spaces using a novel pose-guided Score Distillation Sampling (SDS) loss. Once we complete inpainting high-fidelity 3D geometry we also apply the same SDS loss to its texture to obtain the complete appearance including the initially occluded regions. Through a series of decomposition steps we obtain multiple layers of 3D assets in a shared canonical space normalized in terms of poses and human shapes hence supporting effortless composition to novel identities and reanimation with novel poses. Our experiments demonstrate the effectiveness of our approach for decomposition canonicalization and composition tasks compared to existing solutions.
[]
[]
[]
[]
2,278
2,279
LeGO: Leveraging a Surface Deformation Network for Animatable Stylized Face Generation with One Example
http://arxiv.org/abs/2403.15227
Soyeon Yoon, Kwan Yun, Kwanggyoon Seo, Sihun Cha, Jung Eun Yoo, Junyong Noh
2,403.15227
Recent advances in 3D face stylization have made significant strides in few to zero-shot settings. However the degree of stylization achieved by existing methods is often not sufficient for practical applications because they are mostly based on statistical 3D Morphable Models (3DMM) with limited variations. To this end we propose a method that can produce a highly stylized 3D face model with desired topology. Our methods train a surface deformation network with 3DMM and translate its domain to the target style using a paired exemplar. The network achieves stylization of the 3D face mesh by mimicking the style of the target using a differentiable renderer and directional CLIP losses. Additionally during the inference process we utilize a Mesh Agnostic Encoder (MAGE) that takes deformation target a mesh of diverse topologies as input to the stylization process and encodes its shape into our latent space. The resulting stylized face model can be animated by commonly used 3DMM blend shapes. A set of quantitative and qualitative evaluations demonstrate that our method can produce highly stylized face meshes according to a given style and output them in a desired topology. We also demonstrate example applications of our method including image-based stylized avatar generation linear interpolation of geometric styles and facial animation of stylized avatars.
[]
[]
[]
[]
2,279
2,280
Frequency-Adaptive Dilated Convolution for Semantic Segmentation
Linwei Chen, Lin Gu, Dezhi Zheng, Ying Fu
null
Dilated convolution which expands the receptive field by inserting gaps between its consecutive elements is widely employed in computer vision. In this study we propose three strategies to improve individual phases of dilated convolution from the view of spectrum analysis. Departing from the conventional practice of fixing a global dilation rate as a hyperparameter we introduce Frequency-Adaptive Dilated Convolution (FADC) which dynamically adjusts dilation rates spatially based on local frequency components. Subsequently we design two plug-in modules to directly enhance effective bandwidth and receptive field size. The Adaptive Kernel (AdaKern) module decomposes convolution weights into low-frequency and high-frequency components dynamically adjusting the ratio between these components on a per-channel basis. By increasing the high-frequency part of convolution weights AdaKern captures more high-frequency components thereby improving effective bandwidth. The Frequency Selection (FreqSelect) module optimally balances high- and low-frequency components in feature representations through spatially variant reweighting. It suppresses high frequencies in the background to encourage FADC to learn a larger dilation thereby increasing the receptive field for an expanded scope. Extensive experiments on segmentation and object detection consistently validate the efficacy of our approach. The code is made publicly available at https://github.com/Linwei-Chen/FADC.
[]
[]
[]
[]
2,280
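A compact sketch of the AdaKern decomposition described above (the module layout and gating design are assumptions, not the released code): the kernel is split into its per-channel spatial mean (low-frequency part) and the residual (high-frequency part), and the two branches are reweighted per output channel by an input-dependent gate.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaKernConv(nn.Module):
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, k, k) * 0.01)
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(in_ch, 2 * out_ch, 1),   # per-channel weights for low/high parts
        )
        self.k = k

    def forward(self, x):
        # low-frequency part: spatial mean of the kernel, replicated over the kernel window
        low = self.weight.mean(dim=(2, 3), keepdim=True).expand_as(self.weight).contiguous()
        high = self.weight - low               # high-frequency residual
        g = torch.sigmoid(self.gate(x))        # (B, 2*out_ch, 1, 1)
        g_low, g_high = g.chunk(2, dim=1)
        out_low = F.conv2d(x, low, padding=self.k // 2)
        out_high = F.conv2d(x, high, padding=self.k // 2)
        return g_low * out_low + g_high * out_high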
2,281
3D Building Reconstruction from Monocular Remote Sensing Images with Multi-level Supervisions
http://arxiv.org/abs/2404.04823
Weijia Li, Haote Yang, Zhenghao Hu, Juepeng Zheng, Gui-Song Xia, Conghui He
2,404.04823
3D building reconstruction from monocular remote sensing images is an important and challenging research problem that has received increasing attention in recent years owing to its low cost of data acquisition and availability for large-scale applications. However existing methods rely on expensive 3D-annotated samples for fully-supervised training restricting their application to large-scale cross-city scenarios. In this work we propose MLS-BRN a multi-level supervised building reconstruction network that can flexibly utilize training samples with different annotation levels to achieve better reconstruction results in an end-to-end manner. To alleviate the demand on full 3D supervision we design two new modules Pseudo Building Bbox Calculator and Roof-Offset guided Footprint Extractor as well as new tasks and training strategies for different types of samples. Experimental results on several public and new datasets demonstrate that our proposed MLS-BRN achieves competitive performance using much fewer 3D-annotated samples and significantly improves the footprint extraction and 3D reconstruction performance compared with current state-of-the-art. The code and datasets of this work will be released at https://github.com/opendatalab/MLS-BRN.git.
[]
[]
[]
[]
2,281
2,282
PhyScene: Physically Interactable 3D Scene Synthesis for Embodied AI
http://arxiv.org/abs/2404.09465
Yandan Yang, Baoxiong Jia, Peiyuan Zhi, Siyuan Huang
2,404.09465
With recent developments in Embodied Artificial Intelligence (EAI) research there has been a growing demand for high-quality large-scale interactive scene generation. While prior methods in scene synthesis have prioritized the naturalness and realism of the generated scenes the physical plausibility and interactivity of scenes have been largely left unexplored. To address this disparity we introduce PhyScene a novel method dedicated to generating interactive 3D scenes characterized by realistic layouts articulated objects and rich physical interactivity tailored for embodied agents. Based on a conditional diffusion model for capturing scene layouts we devise novel physics- and interactivity-based guidance mechanisms that integrate constraints from object collision room layout and object reachability. Through extensive experiments we demonstrate that PhyScene effectively leverages these guidance functions for physically interactable scene synthesis outperforming existing state-of-the-art scene synthesis methods by a large margin. Our findings suggest that the scenes generated by PhyScene hold considerable potential for facilitating diverse skill acquisition among agents within interactive environments thereby catalyzing further advancements in embodied AI research.
[]
[]
[]
[]
2,282
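A hedged sketch of guided sampling in the spirit described above: after an ordinary denoising step, the sample is nudged along the negative gradient of differentiable cost functions (e.g. collision, layout, and reachability penalties). base_step, cost_fns, and the step scale are placeholders, not PhyScene's actual interfaces.

import torch

def guided_step(x_t, base_step, cost_fns, scale=0.1):
    x = base_step(x_t)                           # ordinary denoising step (assumed given)
    x = x.detach().requires_grad_(True)
    cost = sum(fn(x) for fn in cost_fns)         # differentiable physics/interactivity costs
    grad, = torch.autograd.grad(cost, x)
    return (x - scale * grad).detach()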
2,283
Generative Latent Coding for Ultra-Low Bitrate Image Compression
Zhaoyang Jia, Jiahao Li, Bin Li, Houqiang Li, Yan Lu
null
Most existing image compression approaches perform transform coding in the pixel space to reduce its spatial redundancy. However they encounter difficulties in achieving both high-realism and high-fidelity at low bitrate as the pixel-space distortion may not align with human perception. To address this issue we introduce a Generative Latent Coding (GLC) architecture which performs transform coding in the latent space of a generative vector-quantized variational auto-encoder (VQ-VAE) instead of in the pixel space. The generative latent space is characterized by greater sparsity richer semantic and better alignment with human perception rendering it advantageous for achieving high-realism and high-fidelity compression. Additionally we introduce a categorical hyper module to reduce the bit cost of hyper-information and a code-prediction-based supervision to enhance the semantic consistency. Experiments demonstrate that our GLC maintains high visual quality with less than 0.04 bpp on natural images and less than 0.01 bpp on facial images. On the CLIC2020 test set we achieve the same FID as MS-ILLM with 45% fewer bits. Furthermore the powerful generative latent space enables various applications built on our GLC pipeline such as image restoration and style transfer.
[]
[]
[]
[]
2,283
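For reference, a minimal sketch of the vector-quantization step that underlies latent-space coding of this kind (generic VQ-VAE quantization with a straight-through gradient; GLC's categorical hyper module and code-prediction supervision are not shown).

import torch

def vector_quantize(z, codebook):
    # z: (B, C, H, W) latent from the encoder; codebook: (K, C)
    B, C, H, W = z.shape
    flat = z.permute(0, 2, 3, 1).reshape(-1, C)                  # (B*H*W, C)
    dist = torch.cdist(flat, codebook)                           # (B*H*W, K)
    idx = dist.argmin(dim=1)                                     # indices to be entropy-coded
    zq = codebook[idx].view(B, H, W, C).permute(0, 3, 1, 2)
    zq = z + (zq - z).detach()                                   # straight-through gradient
    return zq, idx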
2,284
Multiple View Geometry Transformers for 3D Human Pose Estimation
http://arxiv.org/abs/2311.10983
Ziwei Liao, Jialiang Zhu, Chunyu Wang, Han Hu, Steven L. Waslander
2,311.10983
In this work we aim to improve the 3D reasoning ability of Transformers in multi-view 3D human pose estimation. Recent works have focused on end-to-end learning-based transformer designs which struggle to resolve geometric information accurately particularly during occlusion. Instead we propose a novel hybrid model MVGFormer which has a series of geometric and appearance modules organized in an iterative manner. The geometry modules are learning-free and handle all viewpoint-dependent 3D tasks geometrically which notably improves the model's generalization ability. The appearance modules are learnable and are dedicated to estimating 2D poses from image signals end-to-end which enables them to achieve accurate estimates even when occlusion occurs leading to a model that is both accurate and generalizable to new cameras and geometries. We evaluate our approach for both in-domain and out-of-domain settings where our model consistently outperforms state-of-the-art methods and especially does so by a significant margin in the out-of-domain setting. We will release the code and models: https://github.com/XunshanMan/MVGFormer.
[]
[]
[]
[]
2,284
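The learning-free geometry modules rely on classical multi-view geometry; as a concrete (generic) example, direct linear transform triangulation recovers a 3D point from 2D detections and camera projection matrices. This is standard DLT, shown only to illustrate the kind of operation such a module can perform, not MVGFormer's exact module.

import numpy as np

def triangulate_dlt(points_2d, proj_mats):
    # points_2d: (V, 2) pixel coordinates in V views; proj_mats: (V, 3, 4) projections
    A = []
    for (x, y), P in zip(points_2d, proj_mats):
        A.append(x * P[2] - P[0])
        A.append(y * P[2] - P[1])
    A = np.stack(A)                        # (2V, 4)
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]                             # null-space solution in homogeneous coordinates
    return X[:3] / X[3]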
2,285
SiTH: Single-view Textured Human Reconstruction with Image-Conditioned Diffusion
http://arxiv.org/abs/2311.15855
Hsuan-I Ho, Jie Song, Otmar Hilliges
2,311.15855
A long-standing goal of 3D human reconstruction is to create lifelike and fully detailed 3D humans from single-view images. The main challenge lies in inferring unknown body shapes appearances and clothing details in areas not visible in the images. To address this we propose SiTH a novel pipeline that uniquely integrates an image-conditioned diffusion model into a 3D mesh reconstruction workflow. At the core of our method lies the decomposition of the challenging single-view reconstruction problem into generative hallucination and reconstruction subproblems. For the former we employ a powerful generative diffusion model to hallucinate unseen back-view appearance based on the input images. For the latter we leverage skinned body meshes as guidance to recover full-body texture meshes from the input and back-view images. SiTH requires as few as 500 3D human scans for training while maintaining its generality and robustness to diverse images. Extensive evaluations on two 3D human benchmarks including our newly created one highlighted our method's superior accuracy and perceptual quality in 3D textured human reconstruction. Our code and evaluation benchmark is available at https://ait.ethz.ch/sith.
[]
[]
[]
[]
2,285
2,286
Distributionally Generative Augmentation for Fair Facial Attribute Classification
http://arxiv.org/abs/2403.06606
Fengda Zhang, Qianpei He, Kun Kuang, Jiashuo Liu, Long Chen, Chao Wu, Jun Xiao, Hanwang Zhang
2,403.06606
Facial Attribute Classification (FAC) holds substantial promise in widespread applications. However FAC models trained by traditional methodologies can be unfair by exhibiting accuracy inconsistencies across varied data subpopulations. This unfairness is largely attributed to bias in data where some spurious attributes (e.g. Male) statistically correlate with the target attribute (e.g. Smiling). Most existing fairness-aware methods rely on the labels of spurious attributes which may be unavailable in practice. This work proposes a novel generation-based two-stage framework to train a fair FAC model on biased data without additional annotation. Initially we identify the potential spurious attributes based on generative models. Notably it enhances interpretability by explicitly showing the spurious attributes in image space. Following this for each image we first edit the spurious attributes with a random degree sampled from a uniform distribution while keeping the target attribute unchanged. Then we train a fair FAC model by fostering model invariance to these augmentations. Extensive experiments on three common datasets demonstrate the effectiveness of our method in promoting fairness in FAC without compromising accuracy. Code is available at https://github.com/heqianpei/DiGA.
[]
[]
[]
[]
2,286
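A simplified sketch of the second-stage invariance training described above: the classifier sees the original image and an attribute-edited copy, and a consistency term penalizes prediction changes. The KL-based consistency term and its weighting are assumptions, not necessarily the paper's exact loss.

import torch
import torch.nn.functional as F

def fair_fac_loss(model, x, x_edited, y, lam=1.0):
    # x_edited: the same images with a spurious attribute edited to a random degree
    logits = model(x)
    logits_edit = model(x_edited)
    task = F.cross_entropy(logits, y)
    consistency = F.kl_div(logits_edit.log_softmax(dim=1),
                           logits.softmax(dim=1), reduction="batchmean")
    return task + lam * consistency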
2,287
DynVideo-E: Harnessing Dynamic NeRF for Large-Scale Motion- and View-Change Human-Centric Video Editing
Jia-Wei Liu, Yan-Pei Cao, Jay Zhangjie Wu, Weijia Mao, Yuchao Gu, Rui Zhao, Jussi Keppo, Ying Shan, Mike Zheng Shou
null
Despite recent progress in diffusion-based video editing existing methods are limited to short-length videos due to the contradiction between long-range consistency and frame-wise editing. Prior attempts to address this challenge by introducing video-2D representations encounter significant difficulties with large motion- and view-change videos especially in human-centric scenarios. To overcome this we propose to introduce the dynamic Neural Radiance Fields (NeRF) as the innovative video representation where the editing can be performed in the 3D spaces and propagated to the entire video via the deformation field. To provide consistent and controllable editing we propose the image-based video-NeRF editing pipeline with a set of innovative designs including multi-view multi-pose Score Distillation Sampling (SDS) from both the 2D personalized diffusion prior and 3D diffusion prior reconstruction losses text-guided local parts super-resolution and style transfer. Extensive experiments demonstrate that our method dubbed as DynVideo-E significantly outperforms SOTA approaches on two challenging datasets by a large margin of 50%-95% for human preference. Code will be released at https://showlab.github.io/DynVideo-E/.
[]
[]
[]
[]
2,287
2,288
Real-Time Neural BRDF with Spherically Distributed Primitives
http://arxiv.org/abs/2310.08332
Yishun Dou, Zhong Zheng, Qiaoqiao Jin, Bingbing Ni, Yugang Chen, Junxiang Ke
2,310.08332
We propose a neural reflectance model (NeuBRDF) that offers highly versatile material representation yet with light memory and neural computation consumption towards achieving real-time rendering. The results depicted in Fig. 1 rendered at full HD resolution on a contemporary desktop machine demonstrate that our system achieves real-time performance with a wide variety of appearances which is approached by the following two designs. Firstly recognizing that the bidirectional reflectance is distributed in a sparse high-dimensional space we propose to project the BRDF into two low-dimensional components i.e. two hemisphere feature-grids for incoming and outgoing directions respectively. Secondly we distribute learnable neural reflectance primitives on our highly-tailored spherical surface grid. These primitives offer informative features for each hemisphere component and reduce the complexity of the feature learning network leading to fast evaluation. These primitives are centrally stored in a codebook and can be shared across multiple grids and even across materials based on low-cost indices stored in material-specific spherical surface grids. Our NeuBRDF agnostic to the material provides a unified framework for representing a variety of materials consistently. Comprehensive experimental results on measured BRDF compression Monte Carlo simulated BRDF acceleration and extension to spatially varying effects demonstrate the superior quality and generalizability achieved by the proposed scheme.
[]
[]
[]
[]
2,288
2,289
Harnessing Meta-Learning for Improving Full-Frame Video Stabilization
http://arxiv.org/abs/2403.03662
Muhammad Kashif Ali, Eun Woo Im, Dongjin Kim, Tae Hyun Kim
2,403.03662
Video stabilization is a longstanding computer vision problem; pixel-level synthesis solutions, which synthesize full frames, add further complexity to this task. These techniques aim to stabilize videos by synthesizing full frames while enhancing the stability of the considered video. This intensifies the complexity of the task due to the distinct mix of unique motion profiles and visual content present in each video sequence making robust generalization with fixed parameters difficult. In our study we introduce a novel approach to enhance the performance of pixel-level synthesis solutions for video stabilization by adapting these models to individual input video sequences. The proposed adaptation exploits low-level visual cues accessible during test-time to improve both the stability and quality of resulting videos. We highlight the efficacy of our methodology of "test-time adaptation" through simple fine-tuning of one of these models followed by significant stability gain via the integration of meta-learning techniques. Notably significant improvement is achieved with only a single adaptation step. The versatility of the proposed algorithm is demonstrated by consistently improving the performance of various pixel-level synthesis models for video stabilization in real-world scenarios.
[]
[]
[]
[]
2,289
2,290
VideoCrafter2: Overcoming Data Limitations for High-Quality Video Diffusion Models
http://arxiv.org/abs/2401.09047
Haoxin Chen, Yong Zhang, Xiaodong Cun, Menghan Xia, Xintao Wang, Chao Weng, Ying Shan
2,401.09047
Text-to-video generation aims to produce a video based on a given prompt. Recently several commercial video models have been able to generate plausible videos with minimal noise excellent details and high aesthetic scores. However these models rely on large-scale well-filtered high-quality videos that are not accessible to the community. Many existing research works which train models using the low-quality WebVid-10M dataset struggle to generate high-quality videos because the models are optimized to fit WebVid-10M. In this work we explore the training scheme of video models extended from Stable Diffusion and investigate the feasibility of leveraging low-quality videos and synthesized high-quality images to obtain a high-quality video model. We first analyze the connection between the spatial and temporal modules of video models and the distribution shift to low-quality videos. We observe that full training of all modules results in a stronger coupling between spatial and temporal modules than only training temporal modules. Based on this stronger coupling we shift the distribution to higher quality without motion degradation by finetuning spatial modules with high-quality images resulting in a generic high-quality video model. Evaluations are conducted to demonstrate the superiority of the proposed method particularly in picture quality motion and concept composition.
[]
[]
[]
[]
2,290
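A rough sketch of the finetuning recipe implied above: keep temporal modules frozen and update only spatial modules on high-quality images. The name-based filtering ("temporal" in the parameter name) is an assumption about how the modules are labeled, not the actual codebase.

import torch

def spatial_only_optimizer(video_unet, lr=1e-5):
    # Freeze temporal layers; collect spatial parameters for image finetuning.
    spatial_params = []
    for name, p in video_unet.named_parameters():
        if "temporal" in name:              # assumption: temporal blocks are named this way
            p.requires_grad_(False)
        else:
            p.requires_grad_(True)
            spatial_params.append(p)
    return torch.optim.AdamW(spatial_params, lr=lr)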
2,291
From SAM to CAMs: Exploring Segment Anything Model for Weakly Supervised Semantic Segmentation
Hyeokjun Kweon, Kuk-Jin Yoon
null
Weakly Supervised Semantic Segmentation (WSSS) aims to learn the concept of segmentation using image-level class labels. Recent WSSS works have shown promising results by using the Segment Anything Model (SAM) a foundation model for segmentation during the inference phase. However we observe that these methods can still be vulnerable to the noise of class activation maps (CAMs) serving as initial seeds. As a remedy this paper introduces From-SAM-to-CAMs (S2C) a novel WSSS framework that directly transfers the knowledge of SAM to the classifier during the training process enhancing the quality of CAMs itself. S2C comprises SAM-segment Contrasting (SSC) and a CAM-based prompting module (CPM) which exploit SAM at the feature and logit levels respectively. SSC performs prototype-based contrasting using SAM's automatic segmentation results. It constrains each feature to be close to the prototype of its segment and distant from prototypes of the others. Meanwhile CPM extracts prompts from the CAM of each class and uses them to generate class-specific segmentation masks through SAM. The masks are aggregated into unified self-supervision based on the confidence score designed to consider the reliability of both SAM and CAMs. S2C achieves a new state-of-the-art performance across all benchmarks outperforming existing studies by significant margins. The code is available at https://github.com/sangrockEG/S2C.
[]
[]
[]
[]
2,291
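Hedged sketch of the SAM-segment contrasting idea: pool a prototype per SAM segment and apply an InfoNCE-style loss pulling each feature toward its own segment's prototype and away from the others. Shapes, the temperature, and the exact contrastive form are illustrative assumptions.

import torch
import torch.nn.functional as F

def ssc_loss(features, segment_ids, tau=0.1):
    # features: (N, C) pixel/patch features; segment_ids: (N,) SAM segment index per feature
    feats = F.normalize(features, dim=1)
    uniq = segment_ids.unique()
    protos = torch.stack([feats[segment_ids == s].mean(dim=0) for s in uniq])
    protos = F.normalize(protos, dim=1)                      # (S, C)
    logits = feats @ protos.t() / tau                        # (N, S)
    id_to_row = {int(s): i for i, s in enumerate(uniq)}
    target = torch.tensor([id_to_row[int(s)] for s in segment_ids],
                          device=features.device)
    return F.cross_entropy(logits, target)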
2,292
Boosting Flow-based Generative Super-Resolution Models via Learned Prior
http://arxiv.org/abs/2403.10988
Li-Yuan Tsao, Yi-Chen Lo, Chia-Che Chang, Hao-Wei Chen, Roy Tseng, Chien Feng, Chun-Yi Lee
2,403.10988
Flow-based super-resolution (SR) models have demonstrated astonishing capabilities in generating high-quality images. However these methods encounter several challenges during image generation such as grid artifacts exploding inverses and suboptimal results due to a fixed sampling temperature. To overcome these issues this work introduces a conditional learned prior to the inference phase of a flow-based SR model. This prior is a latent code predicted by our proposed latent module conditioned on the low-resolution image which is then transformed by the flow model into an SR image. Our framework is designed to seamlessly integrate with any contemporary flow-based SR model without modifying its architecture or pre-trained weights. We evaluate the effectiveness of our proposed framework through extensive experiments and ablation analyses. The proposed framework successfully addresses all the inherent issues in flow-based SR models and enhances their performance in various SR scenarios. Our code is available at: https://github.com/liyuantsao/FlowSR-LP
[]
[]
[]
[]
2,292
2,293
How to Handle Sketch-Abstraction in Sketch-Based Image Retrieval?
http://arxiv.org/abs/2403.07203
Subhadeep Koley, Ayan Kumar Bhunia, Aneeshan Sain, Pinaki Nath Chowdhury, Tao Xiang, Yi-Zhe Song
2,403.07203
In this paper we propose a novel abstraction-aware sketch-based image retrieval framework capable of handling sketch abstraction at varied levels. Prior works had mainly focused on tackling sub-factors such as drawing style and order we instead attempt to model abstraction as a whole and propose feature-level and retrieval granularity-level designs so that the system builds into its DNA the necessary means to interpret abstraction. On learning abstraction-aware features we for the first-time harness the rich semantic embedding of pre-trained StyleGAN model together with a novel abstraction-level mapper that deciphers the level of abstraction and dynamically selects appropriate dimensions in the feature matrix correspondingly to construct a feature matrix embedding that can be freely traversed to accommodate different levels of abstraction. For granularity-level abstraction understanding we dictate that the retrieval model should not treat all abstraction-levels equally and introduce a differentiable surrogate Acc.@q loss to inject that understanding into the system. Different to the gold-standard triplet loss our Acc.@q loss uniquely allows a sketch to narrow/broaden its focus in terms of how stringent the evaluation should be - the more abstract a sketch the less stringent (higher q). Extensive experiments depict our method to outperform existing state-of-the-arts in standard SBIR tasks along with challenging scenarios like early retrieval forensic sketch-photo matching and style-invariant retrieval.
[]
[]
[]
[]
2,293
2,294
What You See is What You GAN: Rendering Every Pixel for High-Fidelity Geometry in 3D GANs
http://arxiv.org/abs/2401.02411
Alex Trevithick, Matthew Chan, Towaki Takikawa, Umar Iqbal, Shalini De Mello, Manmohan Chandraker, Ravi Ramamoorthi, Koki Nagano
2,401.02411
3D-aware Generative Adversarial Networks (GANs) have shown remarkable progress in learning to generate multi-view-consistent images and 3D geometries of scenes from collections of 2D images via neural volume rendering. Yet the significant memory and computational costs of dense sampling in volume rendering have forced 3D GANs to adopt patch-based training or employ low-resolution rendering with post-processing 2D super resolution which sacrifices multiview consistency and the quality of resolved geometry. Consequently 3D GANs have not yet been able to fully resolve the rich 3D geometry present in 2D images. In this work we propose techniques to scale neural volume rendering to the much higher resolution of native 2D images thereby resolving fine-grained 3D geometry with unprecedented detail. Our approach employs learning-based samplers for accelerating neural rendering for 3D GAN training using up to 5 times fewer depth samples. This enables us to explicitly "render every pixel" of the full-resolution image during training and inference without post-processing superresolution in 2D. Together with our strategy to learn high-quality surface geometry our method synthesizes high-resolution 3D geometry and strictly view-consistent images while maintaining image quality on par with baselines relying on post-processing super resolution. We demonstrate state-of-the-art 3D geometric quality on FFHQ and AFHQ setting a new standard for unsupervised learning of 3D shapes in 3D GANs.
[]
[]
[]
[]
2,294
2,295
Style Injection in Diffusion: A Training-free Approach for Adapting Large-scale Diffusion Models for Style Transfer
http://arxiv.org/abs/2312.09008
Jiwoo Chung, Sangeek Hyun, Jae-Pil Heo
2,312.09008
Despite the impressive generative capabilities of diffusion models existing diffusion model-based style transfer methods require inference-stage optimization (e.g. fine-tuning or textual inversion of style) which is time-consuming or fails to leverage the generative ability of large-scale diffusion models. To address these issues we introduce a novel artistic style transfer method based on a pre-trained large-scale diffusion model without any optimization. Specifically we manipulate the features of self-attention layers as the way the cross-attention mechanism works; in the generation process substituting the key and value of content with those of style image. This approach provides several desirable characteristics for style transfer including 1) preservation of content by transferring similar styles into similar image patches and 2) transfer of style based on similarity of local texture (e.g. edge) between content and style images. Furthermore we introduce query preservation and attention temperature scaling to mitigate the issue of disruption of original content and initial latent Adaptive Instance Normalization (AdaIN) to deal with the disharmonious color (failure to transfer the colors of style). Our experimental results demonstrate that our proposed method surpasses state-of-the-art methods in both conventional and diffusion-based style transfer baselines.
[]
[]
[]
[]
2,295
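An illustrative sketch of the attention-level manipulation described above: keys and values come from the style image's features, the query blends the current query with the content query (query preservation), and the attention logits are sharpened by a temperature factor. The tensor layout, gamma, and the temperature value are assumptions for illustration.

import torch

def style_injected_attention(q_cur, q_content, k_style, v_style, gamma=0.75, temperature=1.5):
    # All tensors: (batch, heads, tokens, dim)
    q = gamma * q_content + (1.0 - gamma) * q_cur            # query preservation
    d = q.shape[-1]
    logits = q @ k_style.transpose(-2, -1) / (d ** 0.5)
    attn = torch.softmax(temperature * logits, dim=-1)       # temperature > 1 sharpens attention
    return attn @ v_style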
2,296
Towards Robust Learning to Optimize with Theoretical Guarantees
Qingyu Song, Wei Lin, Juncheng Wang, Hong Xu
null
Learning to optimize (L2O) is an emerging technique to solve mathematical optimization problems with learning-based methods. Although with great success in many real-world scenarios such as wireless communications computer networks and electronic design existing L2O works lack theoretical demonstration of their performance and robustness in out-of-distribution (OOD) scenarios. We address this gap by providing comprehensive proofs. First we prove a sufficient condition for a robust L2O model with homogeneous convergence rates over all In-Distribution (InD) instances. We assume an L2O model achieves robustness for an InD scenario. Based on our proposed methodology of aligning OOD problems to InD problems we also demonstrate that the L2O model's convergence rate in OOD scenarios will deteriorate by an equation of the L2O model's input features. Moreover we propose an L2O model with a concise gradient-only feature construction and a novel gradient-based history modeling method. Numerical simulation demonstrates that our proposed model outperforms the state-of-the-art baseline in both InD and OOD scenarios and achieves up to 10x convergence speedup. The code of our method can be found at https://github.com/NetX-lab/GoMathL2O-Official.
[]
[]
[]
[]
2,296
2,297
Differentiable Neural Surface Refinement for Modeling Transparent Objects
Weijian Deng, Dylan Campbell, Chunyi Sun, Shubham Kanitkar, Matthew E. Shaffer, Stephen Gould
null
Neural implicit surface reconstruction leveraging volume rendering has led to significant advances in multi-view reconstruction. However results for transparent objects can be very poor primarily because the rendering function fails to account for the intricate light transport induced by refraction and reflection. In this study we introduce transparent neural surface refinement (TNSR) a novel surface reconstruction framework that explicitly incorporates physical refraction and reflection tracing. Beginning with an initial approximate surface our method employs sphere tracing combined with Snell's law to cast both reflected and refracted rays. Central to our proposal is an innovative differentiable technique devised to allow signals from the photometric evidence to propagate back to the surface model by considering how the surface bends and reflects light rays. This allows us to connect surface refinement with volume rendering enabling end-to-end optimization solely on multi-view RGB images. In our experiments TNSR demonstrates significant improvements in novel view synthesis and geometry estimation of transparent objects without prior knowledge of the refractive index.
[]
[]
[]
[]
2,297
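Sketch of the physical ray operations named above (standard mirror reflection and Snell's-law refraction, not the paper's full sphere-tracing pipeline). Directions and normals are assumed unit length, with the incident direction pointing toward the surface and the normal pointing against it; eta is the ratio of refractive indices n1/n2.

import torch

def reflect(d, n):
    # r = d - 2 (d . n) n
    return d - 2.0 * (d * n).sum(-1, keepdim=True) * n

def refract(d, n, eta):
    cos_i = -(d * n).sum(-1, keepdim=True)
    sin2_t = eta ** 2 * (1.0 - cos_i ** 2)
    tir = sin2_t > 1.0                                   # total internal reflection
    cos_t = torch.sqrt((1.0 - sin2_t).clamp_min(0.0))
    t = eta * d + (eta * cos_i - cos_t) * n
    return torch.where(tir, torch.zeros_like(t), t)      # zeros where refraction is undefined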
2,298
OrthCaps: An Orthogonal CapsNet with Sparse Attention Routing and Pruning
http://arxiv.org/abs/2403.13351
Xinyu Geng, Jiaming Wang, Jiawei Gong, Yuerong Xue, Jun Xu, Fanglin Chen, Xiaolin Huang
2,403.13351
Redundancy is a persistent challenge in Capsule Networks (CapsNet) leading to high computational costs and parameter counts. Although previous studies have introduced pruning after the initial capsule layer dynamic routing's fully connected nature and non-orthogonal weight matrices reintroduce redundancy in deeper layers. Besides dynamic routing requires iterating to converge further increasing computational demands. In this paper we propose an Orthogonal Capsule Network (OrthCaps) to reduce redundancy improve routing performance and decrease parameter counts. Firstly an efficient pruned capsule layer is introduced to discard redundant capsules. Secondly dynamic routing is replaced with orthogonal sparse attention routing eliminating the need for iterations and fully connected structures. Lastly weight matrices during routing are orthogonalized to sustain low capsule similarity which is the first approach to use Householder orthogonal decomposition to enforce orthogonality in CapsNet. Our experiments on baseline datasets affirm the efficiency and robustness of OrthCaps in classification tasks in which ablation studies validate the criticality of each component. OrthCaps-Shallow outperforms other Capsule Network benchmarks on four datasets utilizing only 110k parameters - a mere 1.25% of a standard Capsule Network's total. To the best of our knowledge it achieves the smallest parameter count among existing Capsule Networks. Similarly OrthCaps-Deep demonstrates competitive performance across four datasets utilizing only 1.2% of the parameters required by its counterparts.
[]
[]
[]
[]
2,298
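A small sketch of how Householder reflections can parameterize an orthogonal routing weight matrix, as referenced above. The decomposition itself is standard; its integration into the capsule routing is not shown and the parameter shapes are assumptions.

import torch

def householder_orthogonal(vs):
    # Build an orthogonal matrix as a product of reflections H_i = I - 2 v_i v_i^T / ||v_i||^2,
    # from unconstrained vectors vs: (k, n).
    n = vs.shape[1]
    Q = torch.eye(n, dtype=vs.dtype, device=vs.device)
    for v in vs:
        v = v / v.norm().clamp_min(1e-8)
        Q = Q - 2.0 * torch.outer(Q @ v, v)   # right-multiply Q by (I - 2 v v^T)
    return Q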
2,299
ProS: Prompting-to-simulate Generalized knowledge for Universal Cross-Domain Retrieval
http://arxiv.org/abs/2312.12478
Kaipeng Fang, Jingkuan Song, Lianli Gao, Pengpeng Zeng, Zhi-Qi Cheng, Xiyao Li, Heng Tao Shen
2,312.12478
The goal of Universal Cross-Domain Retrieval (UCDR) is to achieve robust performance in generalized test scenarios wherein data may belong to strictly unknown domains and categories during training. Recently pre-trained models with prompt tuning have shown strong generalization capabilities and attained noteworthy achievements in various downstream tasks such as few-shot learning and video-text retrieval. However applying them directly to UCDR may not be sufficient to handle both domain shift (i.e. adapting to unfamiliar domains) and semantic shift (i.e. transferring to unknown categories). To this end we propose Prompting-to-Simulate (ProS) the first method to apply prompt tuning for UCDR. ProS employs a two-step process to simulate Content-aware Dynamic Prompts (CaDP) which can impact models to produce generalized features for UCDR. Concretely in Prompt Units Learning stage we introduce two Prompt Units to individually capture domain and semantic knowledge in a mask-and-align way. Then in Context-aware Simulator Learning stage we train a Content-aware Prompt Simulator under a simulated test scenario to produce the corresponding CaDP. Extensive experiments conducted on three benchmark datasets show that our method achieves new state-of-the-art performance without bringing excessive parameters. Code is available at https://github.com/fangkaipeng/ProS
[]
[]
[]
[]
2,299