title | authors | abstract | pdf | arXiv | video | bibtex | url | detail_url | tags | supp | dataset |
---|---|---|---|---|---|---|---|---|---|---|---|---|
Non-Adversarial Video Synthesis With Learned Priors | Abhishek Aich, Akash Gupta, Rameswar Panda, Rakib Hyder, M. Salman Asif, Amit K. Roy-Chowdhury | Most of the existing works in video synthesis focus on generating videos using adversarial learning. Despite their success, these methods often require an input reference frame or fail to generate diverse videos from the given data distribution, with little to no uniformity in the quality of videos that can be generated. Different from these methods, we focus on the problem of generating videos from latent noise vectors, without any reference input frames. To this end, we develop a novel approach that jointly optimizes the input latent space, the weights of a recurrent neural network and a generator through non-adversarial learning. Optimizing for the input latent space along with the network weights allows us to generate videos in a controlled environment, i.e., we can faithfully generate all videos the model has seen during the learning process as well as new unseen videos. Extensive experiments on three challenging and diverse datasets demonstrate that our proposed approach generates superior quality videos compared to the existing state-of-the-art methods. | https://openaccess.thecvf.com/content_CVPR_2020/papers/Aich_Non-Adversarial_Video_Synthesis_With_Learned_Priors_CVPR_2020_paper.pdf | http://arxiv.org/abs/2003.09565 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Aich_Non-Adversarial_Video_Synthesis_With_Learned_Priors_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Aich_Non-Adversarial_Video_Synthesis_With_Learned_Priors_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com/content_CVPR_2020/supplemental/Aich_Non-Adversarial_Video_Synthesis_CVPR_2020_supplemental.pdf | null | null |
Deep Homography Estimation for Dynamic Scenes | Hoang Le, Feng Liu, Shu Zhang, Aseem Agarwala | Homography estimation is an important step in many computer vision problems. Recently, deep neural network methods have been shown to be favorable for this problem when compared to traditional methods. However, these new methods do not consider dynamic content in input images. They train neural networks with only image pairs that can be perfectly aligned using homographies. This paper investigates and discusses how to design and train a deep neural network that handles dynamic scenes. We first collect a large video dataset with dynamic content. We then develop a multi-scale neural network and show that when properly trained using our new dataset, this neural network can already handle dynamic scenes to some extent. To estimate a homography of a dynamic scene in a more principled way, we need to identify the dynamic content. Since dynamic content detection and homography estimation are two tightly coupled tasks, we follow the multi-task learning principles and augment our multi-scale network such that it jointly estimates the dynamics masks and homographies. Our experiments show that our method can robustly estimate homography for challenging scenarios with dynamic scenes, blur artifacts, or lack of textures. | https://openaccess.thecvf.com/content_CVPR_2020/papers/Le_Deep_Homography_Estimation_for_Dynamic_Scenes_CVPR_2020_paper.pdf | http://arxiv.org/abs/2004.02132 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Le_Deep_Homography_Estimation_for_Dynamic_Scenes_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Le_Deep_Homography_Estimation_for_Dynamic_Scenes_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
Where Does It End? - Reasoning About Hidden Surfaces by Object Intersection Constraints | Michael Strecke, Jorg Stuckler | Dynamic scene understanding is an essential capability in robotics and VR/AR. In this paper we propose Co-Section, an optimization-based approach to 3D dynamic scene reconstruction, which infers hidden shape information from intersection constraints. An object-level dynamic SLAM frontend detects, segments, tracks and maps dynamic objects in the scene. Our optimization backend completes the shapes using hull and intersection constraints between the objects. In experiments, we demonstrate our approach on real and synthetic dynamic scene datasets. We also assess the shape completion performance of our method quantitatively. To the best of our knowledge, our approach is the first method to incorporate such physical plausibility constraints on object intersections for shape completion of dynamic objects in an energy minimization framework. | https://openaccess.thecvf.com/content_CVPR_2020/papers/Strecke_Where_Does_It_End_-_Reasoning_About_Hidden_Surfaces_by_CVPR_2020_paper.pdf | null | https://www.youtube.com/watch?v=BioNf0ymNuE | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Strecke_Where_Does_It_End_-_Reasoning_About_Hidden_Surfaces_by_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Strecke_Where_Does_It_End_-_Reasoning_About_Hidden_Surfaces_by_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com/content_CVPR_2020/supplemental/Strecke_Where_Does_It_CVPR_2020_supplemental.zip | null | null |
Epipolar Transformers | Yihui He, Rui Yan, Katerina Fragkiadaki, Shoou-I Yu | A common approach to localize 3D human joints in a synchronized and calibrated multi-view setup consists of two steps: (1) apply a 2D detector separately on each view to localize joints in 2D, and (2) perform robust triangulation on 2D detections from each view to acquire the 3D joint locations. However, in step 1, the 2D detector is limited to solving challenging cases which could potentially be better resolved in 3D, such as occlusions and oblique viewing angles, purely in 2D without leveraging any 3D information. Therefore, we propose the differentiable "epipolar transformer", which enables the 2D detector to leverage 3D-aware features to improve 2D pose estimation. The intuition is: given a 2D location p in the current view, we would like to first find its corresponding point p' in a neighboring view, and then combine the features at p' with the features at p, thus leading to a 3D-aware feature at p. Inspired by stereo matching, the epipolar transformer leverages epipolar constraints and feature matching to approximate the features at p'. Experiments on InterHand and Human3.6M show that our approach has consistent improvements over the baselines. Specifically, in the condition where no external data is used, our Human3.6M model trained with ResNet-50 backbone and image size 256 x 256 outperforms state-of-the-art by 4.23 mm and achieves MPJPE 26.9 mm. | https://openaccess.thecvf.com/content_CVPR_2020/papers/He_Epipolar_Transformers_CVPR_2020_paper.pdf | http://arxiv.org/abs/2005.04551 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/He_Epipolar_Transformers_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/He_Epipolar_Transformers_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com/content_CVPR_2020/supplemental/He_Epipolar_Transformers_CVPR_2020_supplemental.pdf | null | null |
Correlating Edge, Pose With Parsing | Ziwei Zhang, Chi Su, Liang Zheng, Xiaodong Xie | According to existing studies, human body edge and pose are two beneficial factors to human parsing. The effectiveness of each of the high-level features (edge and pose) is confirmed through the concatenation of their features with the parsing features. Driven by these insights, this paper studies how human semantic boundaries and keypoint locations can jointly improve human parsing. Compared with the existing practice of feature concatenation, we find that uncovering the correlation among the three factors is a superior way of leveraging the pivotal contextual cues provided by edges and poses. To capture such correlations, we propose a Correlation Parsing Machine (CorrPM) employing a heterogeneous non-local block to discover the spatial affinity among feature maps from the edge, pose and parsing. The proposed CorrPM allows us to report new state-of-the-art accuracy on three human parsing datasets. Importantly, comparative studies confirm the advantages of feature correlation over concatenation. | https://openaccess.thecvf.com/content_CVPR_2020/papers/Zhang_Correlating_Edge_Pose_With_Parsing_CVPR_2020_paper.pdf | http://arxiv.org/abs/2005.01431 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Zhang_Correlating_Edge_Pose_With_Parsing_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Zhang_Correlating_Edge_Pose_With_Parsing_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
Relative Interior Rule in Block-Coordinate Descent | Tomas Werner, Daniel Prusa, Tomas Dlask | It is well-known that for general convex optimization problems, block-coordinate descent can get stuck in poor local optima. Despite that, versions of this method known as convergent message passing are very successful at approximately solving the dual LP relaxation of the MAP inference problem in graphical models. In an attempt to identify the reason why these methods often achieve good local minima, we argue that if in block-coordinate descent the set of minimizers over a variable block has multiple elements, one should choose an element from the relative interior of this set. We show that this rule is not worse than any other rule for choosing block-minimizers. Based on this observation, we develop a theoretical framework for block-coordinate descent applied to general convex problems. We illustrate this theory on convergent message-passing methods. | https://openaccess.thecvf.com/content_CVPR_2020/papers/Werner_Relative_Interior_Rule_in_Block-Coordinate_Descent_CVPR_2020_paper.pdf | null | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Werner_Relative_Interior_Rule_in_Block-Coordinate_Descent_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Werner_Relative_Interior_Rule_in_Block-Coordinate_Descent_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com/content_CVPR_2020/supplemental/Werner_Relative_Interior_Rule_CVPR_2020_supplemental.pdf | null | null |
Controllable Person Image Synthesis With Attribute-Decomposed GAN | Yifang Men, Yiming Mao, Yuning Jiang, Wei-Ying Ma, Zhouhui Lian | This paper introduces the Attribute-Decomposed GAN, a novel generative model for controllable person image synthesis, which can produce realistic person images with desired human attributes (e.g., pose, head, upper clothes and pants) provided in various source inputs. The core idea of the proposed model is to embed human attributes into the latent space as independent codes and thus achieve flexible and continuous control of attributes via mixing and interpolation operations in explicit style representations. Specifically, a new architecture consisting of two encoding pathways with style block connections is proposed to decompose the original hard mapping into multiple more accessible subtasks. In the source pathway, we further extract component layouts with an off-the-shelf human parser and feed them into a shared global texture encoder for decomposed latent codes. This strategy allows for the synthesis of more realistic output images and automatic separation of un-annotated attributes. Experimental results demonstrate the proposed method's superiority over the state of the art in pose transfer and its effectiveness in the brand-new task of component attribute transfer. | https://openaccess.thecvf.com/content_CVPR_2020/papers/Men_Controllable_Person_Image_Synthesis_With_Attribute-Decomposed_GAN_CVPR_2020_paper.pdf | http://arxiv.org/abs/2003.12267 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Men_Controllable_Person_Image_Synthesis_With_Attribute-Decomposed_GAN_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Men_Controllable_Person_Image_Synthesis_With_Attribute-Decomposed_GAN_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com/content_CVPR_2020/supplemental/Men_Controllable_Person_Image_CVPR_2020_supplemental.pdf | null | null |
Unpaired Portrait Drawing Generation via Asymmetric Cycle Mapping | Ran Yi, Yong-Jin Liu, Yu-Kun Lai, Paul L. Rosin | Portrait drawing is a common form of art with high abstraction and expressiveness. Due to its unique characteristics, existing methods achieve decent results only with paired training data, which is costly and time-consuming to obtain. In this paper, we address the problem of automatic transfer from face photos to portrait drawings with unpaired training data. We observe that due to the significant imbalance of information richness between photos and drawings, existing unpaired transfer methods such as CycleGAN tend to embed invisible reconstruction information indiscriminately in the whole drawings, leading to important facial features being partially missing in drawings. To address this problem, we propose a novel asymmetric cycle mapping that enforces the reconstruction information to be visible (by a truncation loss) and only embedded in selective facial regions (by a relaxed forward cycle-consistency loss). Along with localized discriminators for the eyes, nose and lips, our method well preserves all important facial features in the generated portrait drawings. By introducing a style classifier and taking the style vector into account, our method can learn to generate portrait drawings in multiple styles using a single network. Extensive experiments show that our model outperforms state-of-the-art methods. | https://openaccess.thecvf.com/content_CVPR_2020/papers/Yi_Unpaired_Portrait_Drawing_Generation_via_Asymmetric_Cycle_Mapping_CVPR_2020_paper.pdf | null | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Yi_Unpaired_Portrait_Drawing_Generation_via_Asymmetric_Cycle_Mapping_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Yi_Unpaired_Portrait_Drawing_Generation_via_Asymmetric_Cycle_Mapping_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com/content_CVPR_2020/supplemental/Yi_Unpaired_Portrait_Drawing_CVPR_2020_supplemental.pdf | null | null |
Advancing High Fidelity Identity Swapping for Forgery Detection | Lingzhi Li, Jianmin Bao, Hao Yang, Dong Chen, Fang Wen | In this work, we study various existing benchmarks for deepfake detection research. In particular, we examine a novel two-stage face swapping algorithm, called FaceShifter, for high fidelity and occlusion aware face swapping. Unlike many existing face swapping works that leverage only limited information from the target image when synthesizing the swapped face, FaceShifter generates the swapped face with high-fidelity by exploiting and integrating the target attributes thoroughly and adaptively. FaceShifter can handle facial occlusions with a second synthesis stage consisting of a Heuristic Error Acknowledging Refinement Network (HEAR-Net), which is trained to recover anomaly regions in a self-supervised way without any manual annotations. Experiments show that existing deepfake detection algorithms perform poorly on FaceShifter, since it achieves advantageous quality over all existing benchmarks. However, our newly developed Face X-Ray method can reliably detect forged images created by FaceShifter. | https://openaccess.thecvf.com/content_CVPR_2020/papers/Li_Advancing_High_Fidelity_Identity_Swapping_for_Forgery_Detection_CVPR_2020_paper.pdf | null | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Li_Advancing_High_Fidelity_Identity_Swapping_for_Forgery_Detection_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Li_Advancing_High_Fidelity_Identity_Swapping_for_Forgery_Detection_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
BachGAN: High-Resolution Image Synthesis From Salient Object Layout | Yandong Li, Yu Cheng, Zhe Gan, Licheng Yu, Liqiang Wang, Jingjing Liu | We propose a new task towards more practical applications for image generation - high-quality image synthesis from salient object layout. This new setting requires users to provide only the layout of salient objects (i.e., foreground bounding boxes and categories) and lets the model complete the drawing with an invented background and a matching foreground. Two main challenges spring from this new task: (i) how to generate fine-grained details and realistic textures without segmentation map input; and (ii) how to create and weave a background into standalone objects in a seamless way. To tackle this, we propose Background Hallucination Generative Adversarial Network (BachGAN), which leverages a background retrieval module to first select a set of segmentation maps from a large candidate pool, then encodes these candidate layouts via a background fusion module to hallucinate a suitable background for the given objects. By generating the hallucinated background representation dynamically, our model can synthesize high-resolution images with both photo-realistic foreground and integral background. Experiments on Cityscapes and ADE20K datasets demonstrate the advantage of BachGAN over existing approaches, measured on both visual fidelity of generated images and visual alignment between output images and input layouts. | https://openaccess.thecvf.com/content_CVPR_2020/papers/Li_BachGAN_High-Resolution_Image_Synthesis_From_Salient_Object_Layout_CVPR_2020_paper.pdf | http://arxiv.org/abs/2003.11690 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Li_BachGAN_High-Resolution_Image_Synthesis_From_Salient_Object_Layout_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Li_BachGAN_High-Resolution_Image_Synthesis_From_Salient_Object_Layout_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
SER-FIQ: Unsupervised Estimation of Face Image Quality Based on Stochastic Embedding Robustness | Philipp Terhorst, Jan Niklas Kolf, Naser Damer, Florian Kirchbuchner, Arjan Kuijper | Face image quality is an important factor to enable high-performance face recognition systems. Face quality assessment aims at estimating the suitability of a face image for the purpose of recognition. Previous work proposed supervised solutions that require artificially or human labelled quality values. However, both labelling mechanisms are error prone as they do not rely on a clear definition of quality and may not know the best characteristics for the utilized face recognition system. Avoiding the use of inaccurate quality labels, we propose a novel concept to measure face quality based on an arbitrary face recognition model. By determining the embedding variations generated from random subnetworks of a face model, the robustness of a sample representation, and thus its quality, is estimated. The experiments are conducted in a cross-database evaluation setting on three publicly available databases. We compare our proposed solution on two face embeddings against six state-of-the-art approaches from academia and industry. The results show that our unsupervised solution outperforms all other approaches in the majority of the investigated scenarios. In contrast to previous works, the proposed solution shows a stable performance over all scenarios. Utilizing the deployed face recognition model for our face quality assessment methodology avoids the training phase completely and further outperforms all baseline approaches by a large margin. Our solution can be easily integrated into current face recognition systems, and can be modified to other tasks beyond face recognition. | https://openaccess.thecvf.com/content_CVPR_2020/papers/Terhorst_SER-FIQ_Unsupervised_Estimation_of_Face_Image_Quality_Based_on_Stochastic_CVPR_2020_paper.pdf | null | https://www.youtube.com/watch?v=soW_Gg4NElc | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Terhorst_SER-FIQ_Unsupervised_Estimation_of_Face_Image_Quality_Based_on_Stochastic_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Terhorst_SER-FIQ_Unsupervised_Estimation_of_Face_Image_Quality_Based_on_Stochastic_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
Globally Optimal Contrast Maximisation for Event-Based Motion Estimation | Daqi Liu, Alvaro Parra, Tat-Jun Chin | Contrast maximisation estimates the motion captured in an event stream by maximising the sharpness of the motion-compensated event image. To carry out contrast maximisation, many previous works employ iterative optimisation algorithms, such as conjugate gradient, which require good initialisation to avoid converging to bad local minima. To alleviate this weakness, we propose a new globally optimal event-based motion estimation algorithm. Based on branch-and-bound (BnB), our method solves rotational (3DoF) motion estimation on event streams, which supports practical applications such as video stabilisation and attitude estimation. Underpinning our method are novel bounding functions for contrast maximisation, whose theoretical validity is rigorously established. We show concrete examples from public datasets where globally optimal solutions are vital to the success of contrast maximisation. Despite its exact nature, our algorithm is currently able to process a 50,000-event input in approx 300 seconds (a locally optimal solver takes approx 30 seconds on the same input), and has the potential to be further sped up using GPUs. | https://openaccess.thecvf.com/content_CVPR_2020/papers/Liu_Globally_Optimal_Contrast_Maximisation_for_Event-Based_Motion_Estimation_CVPR_2020_paper.pdf | http://arxiv.org/abs/2002.10686 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Liu_Globally_Optimal_Contrast_Maximisation_for_Event-Based_Motion_Estimation_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Liu_Globally_Optimal_Contrast_Maximisation_for_Event-Based_Motion_Estimation_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com/content_CVPR_2020/supplemental/Liu_Globally_Optimal_Contrast_CVPR_2020_supplemental.pdf | null | null |
Towards High-Fidelity 3D Face Reconstruction From In-the-Wild Images Using Graph Convolutional Networks | Jiangke Lin, Yi Yuan, Tianjia Shao, Kun Zhou | 3D Morphable Model (3DMM) based methods have achieved great success in recovering 3D face shapes from single-view images. However, the facial textures recovered by such methods lack the fidelity exhibited in the input images. Recent works demonstrate high-quality facial texture recovery with generative networks trained from a large-scale database of high-resolution UV maps of face textures, which is hard to prepare and not publicly available. In this paper, we introduce a method to reconstruct 3D facial shapes with high-fidelity textures from single-view images in the wild, without the need to capture a large-scale face texture database. The main idea is to refine the initial texture generated by a 3DMM based method with facial details from the input image. To this end, we propose to use graph convolutional networks to reconstruct the detailed colors for the mesh vertices instead of reconstructing the UV map. Experiments show that our method can generate high-quality results and outperforms state-of-the-art methods in both qualitative and quantitative comparisons. | https://openaccess.thecvf.com/content_CVPR_2020/papers/Lin_Towards_High-Fidelity_3D_Face_Reconstruction_From_In-the-Wild_Images_Using_Graph_CVPR_2020_paper.pdf | http://arxiv.org/abs/2003.05653 | https://www.youtube.com/watch?v=kAJhxISDr88 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Lin_Towards_High-Fidelity_3D_Face_Reconstruction_From_In-the-Wild_Images_Using_Graph_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Lin_Towards_High-Fidelity_3D_Face_Reconstruction_From_In-the-Wild_Images_Using_Graph_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
PolyTransform: Deep Polygon Transformer for Instance Segmentation | Justin Liang, Namdar Homayounfar, Wei-Chiu Ma, Yuwen Xiong, Rui Hu, Raquel Urtasun | In this paper, we propose PolyTransform, a novel instance segmentation algorithm that produces precise, geometry-preserving masks by combining the strengths of prevailing segmentation approaches and modern polygon-based methods. In particular, we first exploit a segmentation network to generate instance masks. We then convert the masks into a set of polygons that are then fed to a deforming network that transforms the polygons such that they better fit the object boundaries. Our experiments on the challenging Cityscapes dataset show that our PolyTransform significantly improves the performance of the backbone instance segmentation network and ranks 1st on the Cityscapes test-set leaderboard. We also show impressive gains in the interactive annotation setting. | https://openaccess.thecvf.com/content_CVPR_2020/papers/Liang_PolyTransform_Deep_Polygon_Transformer_for_Instance_Segmentation_CVPR_2020_paper.pdf | http://arxiv.org/abs/1912.02801 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Liang_PolyTransform_Deep_Polygon_Transformer_for_Instance_Segmentation_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Liang_PolyTransform_Deep_Polygon_Transformer_for_Instance_Segmentation_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
Towards Fairness in Visual Recognition: Effective Strategies for Bias Mitigation | Zeyu Wang, Klint Qinami, Ioannis Christos Karakozis, Kyle Genova, Prem Nair, Kenji Hata, Olga Russakovsky | Computer vision models learn to perform a task by capturing relevant statistics from training data. It has been shown that models learn spurious age, gender, and race correlations when trained for seemingly unrelated tasks like activity recognition or image captioning. Various mitigation techniques have been presented to prevent models from utilizing or learning such biases. However, there has been little systematic comparison between these techniques. We design a simple but surprisingly effective visual recognition benchmark for studying bias mitigation. Using this benchmark, we provide a thorough analysis of a wide range of techniques. We highlight the shortcomings of popular adversarial training approaches for bias mitigation, propose a simple but similarly effective alternative to the inference-time Reducing Bias Amplification method of Zhao et al., and design a domain-independent training technique that outperforms all other methods. Finally, we validate our findings on the attribute classification task in the CelebA dataset, where attribute presence is known to be correlated with the gender of people in the image, and demonstrate that the proposed technique is effective at mitigating real-world gender bias. | https://openaccess.thecvf.com/content_CVPR_2020/papers/Wang_Towards_Fairness_in_Visual_Recognition_Effective_Strategies_for_Bias_Mitigation_CVPR_2020_paper.pdf | http://arxiv.org/abs/1911.11834 | https://www.youtube.com/watch?v=_4TaQlaUN80 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Wang_Towards_Fairness_in_Visual_Recognition_Effective_Strategies_for_Bias_Mitigation_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Wang_Towards_Fairness_in_Visual_Recognition_Effective_Strategies_for_Bias_Mitigation_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
RDCFace: Radial Distortion Correction for Face Recognition | He Zhao, Xianghua Ying, Yongjie Shi, Xin Tong, Jingsi Wen, Hongbin Zha | The effects of radial lens distortion often appear in wide-angle cameras of surveillance and safeguard systems, which may severely degrade performances of previous face recognition algorithms. Traditional methods for radial lens distortion correction usually employ line features in scenarios that are not suitable for face images. In this paper, we propose a distortion-invariant face recognition system called RDCFace, which directly and only utilizes the distorted images of faces, to alleviate the effects of radial lens distortion. RDCFace is an end-to-end trainable cascade network, which can learn rectification and alignment parameters to achieve a better face recognition performance without requiring supervision of facial landmarks and distortion parameters. We design sequential spatial transformer layers to optimize the correction, alignment, and recognition modules jointly. The feasibility of our method comes from implicitly using the statistics of the layout of face features learned from large-scale face data. Extensive experiments indicate that our method is distortion robust and gains significant improvements on LFW, YTF, CFP, and RadialFace, a real distorted face benchmark, compared with state-of-the-art methods. | https://openaccess.thecvf.com/content_CVPR_2020/papers/Zhao_RDCFace_Radial_Distortion_Correction_for_Face_Recognition_CVPR_2020_paper.pdf | null | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Zhao_RDCFace_Radial_Distortion_Correction_for_Face_Recognition_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Zhao_RDCFace_Radial_Distortion_Correction_for_Face_Recognition_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
Learning Dynamic Routing for Semantic Segmentation | Yanwei Li, Lin Song, Yukang Chen, Zeming Li, Xiangyu Zhang, Xingang Wang, Jian Sun | Recently, numerous handcrafted and searched networks have been applied for semantic segmentation. However, previous works intend to handle inputs with various scales in pre-defined static architectures, such as FCN, U-Net, and DeepLab series. This paper studies a conceptually new method to alleviate the scale variance in semantic representation, named dynamic routing. The proposed framework generates data-dependent routes, adapting to the scale distribution of each image. To this end, a differentiable gating function, called soft conditional gate, is proposed to select scale transform paths on the fly. In addition, the computational cost can be further reduced in an end-to-end manner by giving budget constraints to the gating function. We further relax the network level routing space to support multi-path propagations and skip-connections in each forward, bringing substantial network capacity. To demonstrate the superiority of the dynamic property, we compare with several static architectures, which can be modeled as special cases in the routing space. Extensive experiments are conducted on Cityscapes and PASCAL VOC 2012 to illustrate the effectiveness of the dynamic framework. Code is available at https://github.com/yanwei-li/DynamicRouting. | https://openaccess.thecvf.com/content_CVPR_2020/papers/Li_Learning_Dynamic_Routing_for_Semantic_Segmentation_CVPR_2020_paper.pdf | http://arxiv.org/abs/2003.10401 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Li_Learning_Dynamic_Routing_for_Semantic_Segmentation_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Li_Learning_Dynamic_Routing_for_Semantic_Segmentation_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
GNN3DMOT: Graph Neural Network for 3D Multi-Object Tracking With 2D-3D Multi-Feature Learning | Xinshuo Weng, Yongxin Wang, Yunze Man, Kris M. Kitani | 3D Multi-object tracking (MOT) is crucial to autonomous systems. Recent work uses a standard tracking-by-detection pipeline, where feature extraction is first performed independently for each object in order to compute an affinity matrix. Then the affinity matrix is passed to the Hungarian algorithm for data association. A key process of this standard pipeline is to learn discriminative features for different objects in order to reduce confusion during data association. In this work, we propose two techniques to improve the discriminative feature learning for MOT: (1) instead of obtaining features for each object independently, we propose a novel feature interaction mechanism by introducing the Graph Neural Network. As a result, the feature of one object is informed of the features of other objects so that the object feature can lean towards the object with a similar feature (i.e., an object probably with the same ID) and deviate from objects with dissimilar features (i.e., objects probably with different IDs), leading to a more discriminative feature for each object; (2) instead of obtaining the feature from either 2D or 3D space in prior work, we propose a novel joint feature extractor to learn appearance and motion features from 2D and 3D space simultaneously. As features from different modalities often have complementary information, the joint feature can be more discriminative than features from each individual modality. To ensure that the joint feature extractor does not heavily rely on one modality, we also propose an ensemble training paradigm. Through extensive evaluation, our proposed method achieves state-of-the-art performance on KITTI and nuScenes 3D MOT benchmarks. Our code will be made available at https://github.com/xinshuoweng/GNN3DMOT | https://openaccess.thecvf.com/content_CVPR_2020/papers/Weng_GNN3DMOT_Graph_Neural_Network_for_3D_Multi-Object_Tracking_With_2D-3D_CVPR_2020_paper.pdf | null | https://www.youtube.com/watch?v=WexHfLVMZQs | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Weng_GNN3DMOT_Graph_Neural_Network_for_3D_Multi-Object_Tracking_With_2D-3D_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Weng_GNN3DMOT_Graph_Neural_Network_for_3D_Multi-Object_Tracking_With_2D-3D_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
Searching Central Difference Convolutional Networks for Face Anti-Spoofing | Zitong Yu, Chenxu Zhao, Zezheng Wang, Yunxiao Qin, Zhuo Su, Xiaobai Li, Feng Zhou, Guoying Zhao | Face anti-spoofing (FAS) plays a vital role in face recognition systems. Most state-of-the-art FAS methods 1) rely on stacked convolutions and expert-designed networks, which are weak in describing detailed fine-grained information and easily become ineffective when the environment varies (e.g., different illumination), and 2) prefer to use long sequences as input to extract dynamic features, making them difficult to deploy into scenarios which need quick response. Here we propose a novel frame level FAS method based on Central Difference Convolution (CDC), which is able to capture intrinsic detailed patterns via aggregating both intensity and gradient information. A network built with CDC, called the Central Difference Convolutional Network (CDCN), is able to provide more robust modeling capacity than its counterpart built with vanilla convolution. Furthermore, over a specifically designed CDC search space, Neural Architecture Search (NAS) is utilized to discover a more powerful network structure (CDCN++), which can be assembled with Multiscale Attention Fusion Module (MAFM) for further boosting performance. Comprehensive experiments are performed on six benchmark datasets to show that 1) the proposed method not only achieves superior performance on intra-dataset testing (especially 0.2% ACER in Protocol-1 of OULU-NPU dataset), 2) it also generalizes well on cross-dataset testing (particularly 6.5% HTER from CASIA-MFSD to Replay-Attack datasets). The codes are available at https://github.com/ZitongYu/CDCN. | https://openaccess.thecvf.com/content_CVPR_2020/papers/Yu_Searching_Central_Difference_Convolutional_Networks_for_Face_Anti-Spoofing_CVPR_2020_paper.pdf | http://arxiv.org/abs/2003.04092 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Yu_Searching_Central_Difference_Convolutional_Networks_for_Face_Anti-Spoofing_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Yu_Searching_Central_Difference_Convolutional_Networks_for_Face_Anti-Spoofing_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com/content_CVPR_2020/supplemental/Yu_Searching_Central_Difference_CVPR_2020_supplemental.pdf | null | null |
PREDICT & CLUSTER: Unsupervised Skeleton Based Action Recognition | Kun Su, Xiulong Liu, Eli Shlizerman | We propose a novel system for unsupervised skeleton-based action recognition. Given inputs of body-keypoints sequences obtained during various movements, our system associates the sequences with actions. Our system is based on an encoder-decoder recurrent neural network, where the encoder learns a separable feature representation within its hidden states formed by training the model to perform the prediction task. We show that with such unsupervised training, the decoder and the encoder self-organize their hidden states into a feature space which clusters similar movements into the same cluster and distinct movements into distant clusters. Current state-of-the-art methods for action recognition are strongly supervised, i.e., rely on providing labels for training. Unsupervised methods have been proposed, however, they require camera and depth inputs (RGB+D) at each time step. In contrast, our system is fully unsupervised, does not require action labels at any stage and can operate with body-keypoints input only. Furthermore, the method can perform on various dimensions of body-keypoints (2D or 3D) and can include additional cues describing movements. We evaluate our system on three action recognition benchmarks with different numbers of actions and examples. Our results outperform prior unsupervised skeleton-based methods and unsupervised RGB+D based methods on cross-view tests, and, while being unsupervised, are comparable in performance to supervised skeleton-based action recognition. | https://openaccess.thecvf.com/content_CVPR_2020/papers/Su_PREDICT__CLUSTER_Unsupervised_Skeleton_Based_Action_Recognition_CVPR_2020_paper.pdf | http://arxiv.org/abs/1911.12409 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Su_PREDICT__CLUSTER_Unsupervised_Skeleton_Based_Action_Recognition_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Su_PREDICT__CLUSTER_Unsupervised_Skeleton_Based_Action_Recognition_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com/content_CVPR_2020/supplemental/Su_PREDICT__CLUSTER_CVPR_2020_supplemental.zip | null | null |
RetinaFace: Single-Shot Multi-Level Face Localisation in the Wild | Jiankang Deng, Jia Guo, Evangelos Ververas, Irene Kotsia, Stefanos Zafeiriou | Though tremendous strides have been made in uncontrolled face detection, accurate and efficient 2D face alignment and 3D face reconstruction in-the-wild remain an open challenge. In this paper, we present a novel single-shot, multi-level face localisation method, named RetinaFace, which unifies face box prediction, 2D facial landmark localisation and 3D vertices regression under one common target: point regression on the image plane. To fill the data gap, we manually annotated five facial landmarks on the WIDER FACE dataset and employed a semi-automatic annotation pipeline to generate 3D vertices for face images from the WIDER FACE, AFLW and FDDB datasets. Based on extra annotations, we propose a mutually beneficial regression target for 3D face reconstruction, that is, predicting 3D vertices projected on the image plane, constrained by a common 3D topology. The proposed 3D face reconstruction branch can be easily incorporated, without any optimisation difficulty, in parallel with the existing box and 2D landmark regression branches during joint training. Extensive experimental results show that RetinaFace can simultaneously achieve stable face detection, accurate 2D face alignment and robust 3D face reconstruction while being efficient through single-shot inference. | https://openaccess.thecvf.com/content_CVPR_2020/papers/Deng_RetinaFace_Single-Shot_Multi-Level_Face_Localisation_in_the_Wild_CVPR_2020_paper.pdf | null | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Deng_RetinaFace_Single-Shot_Multi-Level_Face_Localisation_in_the_Wild_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Deng_RetinaFace_Single-Shot_Multi-Level_Face_Localisation_in_the_Wild_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com/content_CVPR_2020/supplemental/Deng_RetinaFace_Single-Shot_Multi-Level_CVPR_2020_supplemental.pdf | null | null |
Monocular Real-Time Hand Shape and Motion Capture Using Multi-Modal Data | Yuxiao Zhou, Marc Habermann, Weipeng Xu, Ikhsanul Habibie, Christian Theobalt, Feng Xu | We present a novel method for monocular hand shape and pose estimation at unprecedented runtime performance of 100fps and at state-of-the-art accuracy. This is enabled by a new learning based architecture designed such that it can make use of all the sources of available hand training data: image data with either 2D or 3D annotations, as well as stand-alone 3D animations without corresponding image data. It features a 3D hand joint detection module and an inverse kinematics module which regresses not only 3D joint positions but also maps them to joint rotations in a single feed-forward pass. This output makes the method more directly usable for applications in computer vision and graphics compared to only regressing 3D joint positions. We demonstrate that our architectural design leads to a significant quantitative and qualitative improvement over the state of the art on several challenging benchmarks. We will make our code publicly available for future research. | https://openaccess.thecvf.com/content_CVPR_2020/papers/Zhou_Monocular_Real-Time_Hand_Shape_and_Motion_Capture_Using_Multi-Modal_Data_CVPR_2020_paper.pdf | http://arxiv.org/abs/2003.09572 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Zhou_Monocular_Real-Time_Hand_Shape_and_Motion_Capture_Using_Multi-Modal_Data_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Zhou_Monocular_Real-Time_Hand_Shape_and_Motion_Capture_Using_Multi-Modal_Data_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com/content_CVPR_2020/supplemental/Zhou_Monocular_Real-Time_Hand_CVPR_2020_supplemental.pdf | null | null |
Mitigating Bias in Face Recognition Using Skewness-Aware Reinforcement Learning | Mei Wang, Weihong Deng | Racial equality is an important theme of international human rights law, but it has been largely obscured when the overall face recognition accuracy is pursued blindly. Mounting evidence indicates that racial bias indeed degrades the fairness of recognition systems, and error rates on non-Caucasians are usually much higher than on Caucasians. To encourage fairness, we introduce the idea of adaptive margin to learn balanced performance for different races based on large margin losses. A reinforcement learning based race balance network (RL-RBN) is proposed. We formulate the process of finding the optimal margins for non-Caucasians as a Markov decision process and employ deep Q-learning to learn policies for an agent to select appropriate margin by approximating the Q-value function. Guided by the agent, the skewness of feature scatter between races can be reduced. Besides, we provide two ethnicity aware training datasets, called BUPT-Globalface and BUPT-Balancedface dataset, which can be utilized to study racial bias from both data and algorithm aspects. Extensive experiments on RFW database show that RL-RBN successfully mitigates racial bias and learns more balanced performance. | https://openaccess.thecvf.com/content_CVPR_2020/papers/Wang_Mitigating_Bias_in_Face_Recognition_Using_Skewness-Aware_Reinforcement_Learning_CVPR_2020_paper.pdf | http://arxiv.org/abs/1911.10692 | https://www.youtube.com/watch?v=nttSIxTGH7g | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Wang_Mitigating_Bias_in_Face_Recognition_Using_Skewness-Aware_Reinforcement_Learning_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Wang_Mitigating_Bias_in_Face_Recognition_Using_Skewness-Aware_Reinforcement_Learning_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
Single Image Reflection Removal With Physically-Based Training Images | Soomin Kim, Yuchi Huo, Sung-Eui Yoon | Recently, deep learning-based single image reflection separation methods have been exploited widely. To benefit the learning approach, a large number of training image pairs (i.e., with and without reflections) were synthesized in various ways, yet they are far from being physically based. In this paper, physically based rendering is used for faithfully synthesizing the required training images, and a corresponding network structure and loss term are proposed. We utilize existing RGBD/RGB images to estimate meshes, then physically simulate the light transportation between meshes, glass, and lens with path tracing to synthesize training data, which successfully reproduces the spatially variant anisotropic visual effect of glass reflection. For guiding the separation better, we additionally consider a module, backtrack network (BT-net), for backtracking the reflections, which removes the complicated ghosting, attenuation, blurring and defocus effects of the glass/lens. This enables obtaining a priori information before having the distortion. The proposed method, considering additional a priori information with physically simulated training data, is validated with various real reflection images and shows visually pleasing results and numerical advantages compared with state-of-the-art techniques. | https://openaccess.thecvf.com/content_CVPR_2020/papers/Kim_Single_Image_Reflection_Removal_With_Physically-Based_Training_Images_CVPR_2020_paper.pdf | http://arxiv.org/abs/1904.11934 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Kim_Single_Image_Reflection_Removal_With_Physically-Based_Training_Images_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Kim_Single_Image_Reflection_Removal_With_Physically-Based_Training_Images_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
Disentangled Image Generation Through Structured Noise Injection | Yazeed Alharbi, Peter Wonka | We explore different design choices for injecting noise into generative adversarial networks (GANs) with the goal of disentangling the latent space. Instead of traditional approaches, we propose feeding multiple noise codes through separate fully-connected layers. The aim is restricting the influence of each noise code to specific parts of the generated image. We show that disentanglement in the first layer of the generator network leads to disentanglement in the generated image. Through a grid-based structure, we achieve several aspects of disentanglement without complicating the network architecture and without requiring labels. We achieve spatial disentanglement, scale-space disentanglement, and disentanglement of the foreground object from the background style allowing fine-grained control over the generated images. Examples include changing facial expressions in face images, changing beak length in bird images, and changing car dimensions in car images. This empirically leads to better disentanglement scores than state-of-the-art methods on the FFHQ dataset. | https://openaccess.thecvf.com/content_CVPR_2020/papers/Alharbi_Disentangled_Image_Generation_Through_Structured_Noise_Injection_CVPR_2020_paper.pdf | http://arxiv.org/abs/2004.12411 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Alharbi_Disentangled_Image_Generation_Through_Structured_Noise_Injection_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Alharbi_Disentangled_Image_Generation_Through_Structured_Noise_Injection_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
Deep 3D Capture: Geometry and Reflectance From Sparse Multi-View Images | Sai Bi, Zexiang Xu, Kalyan Sunkavalli, David Kriegman, Ravi Ramamoorthi | We introduce a novel learning-based method to reconstruct the high-quality geometry and complex, spatially-varying BRDF of an arbitrary object from a sparse set of only six images captured by wide-baseline cameras under collocated point lighting. We first estimate per-view depth maps using a deep multi-view stereo network; these depth maps are used to coarsely align the different views. We propose a novel multi-view reflectance estimation network architecture that is trained to pool features from these coarsely aligned images and predict per-view spatially-varying diffuse albedo, surface normals, specular roughness and specular albedo. We do this by jointly optimizing the latent space of our multi-view reflectance network to minimize the photometric error between images rendered with our predictions and the input images. While previous state-of-the-art methods fail on such sparse acquisition setups, we demonstrate, via extensive experiments on synthetic and real data, that our method produces high-quality reconstructions that can be used to render photorealistic images. | https://openaccess.thecvf.com/content_CVPR_2020/papers/Bi_Deep_3D_Capture_Geometry_and_Reflectance_From_Sparse_Multi-View_Images_CVPR_2020_paper.pdf | http://arxiv.org/abs/2003.12642 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Bi_Deep_3D_Capture_Geometry_and_Reflectance_From_Sparse_Multi-View_Images_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Bi_Deep_3D_Capture_Geometry_and_Reflectance_From_Sparse_Multi-View_Images_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com/content_CVPR_2020/supplemental/Bi_Deep_3D_Capture_CVPR_2020_supplemental.pdf | null | null |
Multi-Scale Fusion Subspace Clustering Using Similarity Constraint | Zhiyuan Dang, Cheng Deng, Xu Yang, Heng Huang | Classical subspace clustering methods often assume that the raw data lie in a union of low-dimensional linear subspaces. This assumption is too strict in practice, which largely limits the generalization of subspace clustering. To tackle this issue, deep subspace clustering (DSC) networks based on the deep autoencoder (DAE) have been proposed, which non-linearly map the raw data into a latent space well-adapted to subspace clustering. However, existing DSC models ignore the important multi-scale information embedded in the DAE, thus abandoning more useful deep features and leading to suboptimal clustering results. In this paper, we propose the Multi-Scale Fusion Subspace Clustering Using Similarity Constraint (SC-MSFSC) network, which learns a more discriminative self-expression coefficient matrix via a novel multi-scale fusion module. More importantly, it introduces a similarity constraint module to guide the fused self-expression coefficient matrix during training. Specifically, the multi-scale fusion module is designed to generate the self-expression coefficient matrix of each convolutional layer in the DAE and then fuse them with the convolutional kernel. In addition, the similarity constraint module supervises the fused self-expression coefficient matrix with the designed similarity matrix. Extensive experimental results on four benchmark datasets demonstrate the superiority of our new model against state-of-the-art methods. | https://openaccess.thecvf.com/content_CVPR_2020/papers/Dang_Multi-Scale_Fusion_Subspace_Clustering_Using_Similarity_Constraint_CVPR_2020_paper.pdf | null | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Dang_Multi-Scale_Fusion_Subspace_Clustering_Using_Similarity_Constraint_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Dang_Multi-Scale_Fusion_Subspace_Clustering_Using_Similarity_Constraint_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
GroupFace: Learning Latent Groups and Constructing Group-Based Representations for Face Recognition | Yonghyun Kim, Wonpyo Park, Myung-Cheol Roh, Jongju Shin | In the field of face recognition, a model learns to distinguish millions of face images with low-dimensional embedding features, and such vast information may not be properly encoded in the conventional model with a single branch. We propose a novel face-recognition-specialized architecture called GroupFace that utilizes multiple group-aware representations, simultaneously, to improve the quality of the embedding feature. The proposed method provides self-distributed labels that balance the number of samples belonging to each group without additional human annotations, and learns the group-aware representations that can narrow down the search space of the target identity. We prove the effectiveness of the proposed method by showing extensive ablation studies and visualizations. All the components of the proposed method can be trained in an end-to-end manner with a marginal increase of computational complexity. Finally, the proposed method achieves the state-of-the-art results with significant improvements in 1:1 face verification and 1:N face identification tasks on the following public datasets: LFW, YTF, CALFW, CPLFW, CFP, AgeDB-30, MegaFace, IJB-B and IJB-C. | https://openaccess.thecvf.com/content_CVPR_2020/papers/Kim_GroupFace_Learning_Latent_Groups_and_Constructing_Group-Based_Representations_for_Face_CVPR_2020_paper.pdf | http://arxiv.org/abs/2005.10497 | https://www.youtube.com/watch?v=3QHgDucMUGs | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Kim_GroupFace_Learning_Latent_Groups_and_Constructing_Group-Based_Representations_for_Face_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Kim_GroupFace_Learning_Latent_Groups_and_Constructing_Group-Based_Representations_for_Face_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
Learning to Optimize Non-Rigid Tracking | Yang Li, Aljaz Bozic, Tianwei Zhang, Yanli Ji, Tatsuya Harada, Matthias Niessner | One of the widespread solutions for non-rigid tracking has a nested-loop structure: Gauss-Newton minimizes a tracking objective in the outer loop, and Preconditioned Conjugate Gradient (PCG) solves a sparse linear system in the inner loop. In this paper, we employ learnable optimizations to improve tracking robustness and speed up solver convergence. First, we upgrade the tracking objective by integrating an alignment data term on deep features which are learned end-to-end through a CNN. The new tracking objective can capture the global deformation which helps Gauss-Newton to jump over local minima, leading to robust tracking on large non-rigid motions. Second, we bridge the gap between the preconditioning technique and learning method by introducing a ConditionNet which is trained to generate a preconditioner such that PCG can converge within a small number of steps. Experimental results indicate that the proposed learning method converges faster than the original PCG by a large margin. | https://openaccess.thecvf.com/content_CVPR_2020/papers/Li_Learning_to_Optimize_Non-Rigid_Tracking_CVPR_2020_paper.pdf | http://arxiv.org/abs/2003.12230 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Li_Learning_to_Optimize_Non-Rigid_Tracking_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Li_Learning_to_Optimize_Non-Rigid_Tracking_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com/content_CVPR_2020/supplemental/Li_Learning_to_Optimize_CVPR_2020_supplemental.pdf | null | null |
Weakly Supervised Discriminative Feature Learning With State Information for Person Identification | Hong-Xing Yu, Wei-Shi Zheng | Unsupervised learning of identity-discriminative visual feature is appealing in real-world tasks where manual labelling is costly. However, the images of an identity can be visually discrepant when images are taken under different states, e.g. different camera views and poses. This visual discrepancy leads to great difficulty in unsupervised discriminative learning. Fortunately, in real-world tasks we could often know the states without human annotation, e.g. we can easily have the camera view labels in person re-identification and facial pose labels in face recognition. In this work we propose utilizing the state information as weak supervision to address the visual discrepancy caused by different states. We formulate a simple pseudo label model and utilize the state information in an attempt to refine the assigned pseudo labels by the weakly supervised decision boundary rectification and weakly supervised feature drift regularization. We evaluate our model on unsupervised person re-identification and pose-invariant face recognition. Despite the simplicity of our method, it could outperform the state-of-the-art results on Duke-reID, MultiPIE and CFP datasets with a standard ResNet-50 backbone. We also find our model could perform comparably with the standard supervised fine-tuning results on the three datasets. Code is available at https://github.com/KovenYu/state-information. | https://openaccess.thecvf.com/content_CVPR_2020/papers/Yu_Weakly_Supervised_Discriminative_Feature_Learning_With_State_Information_for_Person_CVPR_2020_paper.pdf | http://arxiv.org/abs/2002.11939 | https://www.youtube.com/watch?v=t-FKDBluKX8 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Yu_Weakly_Supervised_Discriminative_Feature_Learning_With_State_Information_for_Person_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Yu_Weakly_Supervised_Discriminative_Feature_Learning_With_State_Information_for_Person_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com/content_CVPR_2020/supplemental/Yu_Weakly_Supervised_Discriminative_CVPR_2020_supplemental.pdf | null | null |
An Internal Covariate Shift Bounding Algorithm for Deep Neural Networks by Unitizing Layers' Outputs | You Huang, Yuanlong Yu | Batch Normalization (BN) techniques have been proposed to reduce the so-called Internal Covariate Shift (ICS) by attempting to keep the distributions of layer outputs unchanged. Experiments have shown their effectiveness on training deep neural networks. However, since only the first two moments are controlled in these BN techniques, only a weak constraint is imposed on the layer distributions, and whether such a constraint can reduce ICS is unknown. Thus, this paper proposes a measure for ICS based on the Earth Mover (EM) distance and then derives upper and lower bounds for the measure to provide a theoretical analysis of BN. The upper bound shows that BN techniques can control ICS only for outputs with low dimensions and small noise, whereas their control is not effective in other cases. This paper also proves that such control is merely a bounding of ICS rather than a reduction of ICS. Meanwhile, the analysis shows that the high-order moments and noise, which BN cannot control, have a great impact on the lower bound. Based on this analysis, this paper further proposes an algorithm that unitizes the outputs with an adjustable parameter to further bound ICS in order to cope with the problems of BN. The upper bound for the proposed unitization is noise-free and dominated only by the parameter. Thus, the parameter can be trained to tune the bound and further control ICS. Besides, the unitization is embedded into the framework of BN to reduce the information loss. Experiments show that the proposed algorithm outperforms existing BN techniques on the CIFAR-10, CIFAR-100 and ImageNet datasets. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Huang_An_Internal_Covariate_Shift_Bounding_Algorithm_for_Deep_Neural_Networks_CVPR_2020_paper.pdf | http://arxiv.org/abs/2001.02814 | https://www.youtube.com/watch?v=TgvzANZU-I4 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Huang_An_Internal_Covariate_Shift_Bounding_Algorithm_for_Deep_Neural_Networks_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Huang_An_Internal_Covariate_Shift_Bounding_Algorithm_for_Deep_Neural_Networks_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Huang_An_Internal_Covariate_CVPR_2020_supplemental.pdf | null | null |
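The abstract above measures ICS with the Earth Mover (EM) distance between layer-output distributions. For one-dimensional, equal-size samples the EM (Wasserstein-1) distance has a simple closed-form estimator, the mean absolute difference of sorted values; the sketch below illustrates that generic estimator only, not the paper's bounds or unitization algorithm.

```python
import numpy as np

def em_distance_1d(u, v):
    """Empirical Wasserstein-1 (Earth Mover) distance between two 1-D samples
    of equal size: the mean absolute difference of the sorted values."""
    return np.mean(np.abs(np.sort(u) - np.sort(v)))

rng = np.random.default_rng(0)
pre = rng.normal(0.0, 1.0, 10_000)    # layer outputs before a parameter update
post = rng.normal(0.3, 1.2, 10_000)   # outputs after the update (shifted/scaled)
print(em_distance_1d(pre, post))      # a nonzero value quantifies the shift
```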
MixNMatch: Multifactor Disentanglement and Encoding for Conditional Image Generation | Yuheng Li, Krishna Kumar Singh, Utkarsh Ojha, Yong Jae Lee | We present MixNMatch, a conditional generative model that learns to disentangle and encode background, object pose, shape, and texture from real images with minimal supervision, for mix-and-match image generation. We build upon FineGAN, an unconditional generative model, to learn the desired disentanglement and image generator, and leverage adversarial joint image-code distribution matching to learn the latent factor encoders. MixNMatch requires bounding boxes during training to model background, but requires no other supervision. Through extensive experiments, we demonstrate MixNMatch's ability to accurately disentangle, encode, and combine multiple factors for mix-and-match image generation, including sketch2color, cartoon2img, and img2gif applications. Our code/models/demo can be found at https://github.com/Yuheng-Li/MixNMatch | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Li_MixNMatch_Multifactor_Disentanglement_and_Encoding_for_Conditional_Image_Generation_CVPR_2020_paper.pdf | http://arxiv.org/abs/1911.11758 | https://www.youtube.com/watch?v=s0rmHppyUyY | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Li_MixNMatch_Multifactor_Disentanglement_and_Encoding_for_Conditional_Image_Generation_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Li_MixNMatch_Multifactor_Disentanglement_and_Encoding_for_Conditional_Image_Generation_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Li_MixNMatch_Multifactor_Disentanglement_CVPR_2020_supplemental.zip | null | null |
Parsing-Based View-Aware Embedding Network for Vehicle Re-Identification | Dechao Meng, Liang Li, Xuejing Liu, Yadong Li, Shijie Yang, Zheng-Jun Zha, Xingyu Gao, Shuhui Wang, Qingming Huang | Vehicle re-identification aims to find images of the same vehicle across various views in the cross-camera scenario. The main challenges of this task are the large intra-instance distance caused by different views and the subtle inter-instance discrepancy caused by similar vehicles. In this paper, we propose a parsing-based view-aware embedding network (PVEN) to achieve view-aware feature alignment and enhancement for vehicle ReID. First, we introduce a parsing network to parse a vehicle into four different views and then align the features by mask average pooling. Such alignment provides a fine-grained representation of the vehicle. Second, in order to enhance the view-aware features, we design a common-visible attention module that focuses on the commonly visible views, which not only shortens intra-instance distances but also enlarges inter-instance discrepancies. PVEN helps capture stable, discriminative information about a vehicle under different views. Experiments conducted on three datasets show that our model outperforms state-of-the-art methods by a large margin. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Meng_Parsing-Based_View-Aware_Embedding_Network_for_Vehicle_Re-Identification_CVPR_2020_paper.pdf | http://arxiv.org/abs/2004.05021 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Meng_Parsing-Based_View-Aware_Embedding_Network_for_Vehicle_Re-Identification_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Meng_Parsing-Based_View-Aware_Embedding_Network_for_Vehicle_Re-Identification_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
PF-Net: Point Fractal Network for 3D Point Cloud Completion | Zitian Huang, Yikuan Yu, Jiawen Xu, Feng Ni, Xinyi Le | In this paper, we propose a Point Fractal Network (PF-Net), a novel learning-based approach for precise and high-fidelity point cloud completion. Unlike existing point cloud completion networks, which generate the overall shape from the incomplete point cloud, often altering existing points and suffering from noise and geometric loss, PF-Net preserves the spatial arrangement of the incomplete point cloud and can recover the detailed geometric structure of the missing region(s) in the prediction. To succeed at this task, PF-Net estimates the missing point cloud hierarchically by utilizing a feature-points-based multi-scale generating network. Further, we combine a multi-stage completion loss with an adversarial loss to generate more realistic missing region(s). The adversarial loss can better tackle multiple modes in the prediction. Our experiments demonstrate the effectiveness of our method on several challenging point cloud completion tasks. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Huang_PF-Net_Point_Fractal_Network_for_3D_Point_Cloud_Completion_CVPR_2020_paper.pdf | null | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Huang_PF-Net_Point_Fractal_Network_for_3D_Point_Cloud_Completion_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Huang_PF-Net_Point_Fractal_Network_for_3D_Point_Cloud_Completion_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
Stochastic Classifiers for Unsupervised Domain Adaptation | Zhihe Lu, Yongxin Yang, Xiatian Zhu, Cong Liu, Yi-Zhe Song, Tao Xiang | A common strategy adopted by existing state-of-the-art unsupervised domain adaptation (UDA) methods is to employ two classifiers to identify misaligned local regions between the source and target domains. Following the 'wisdom of the crowd' principle, one has to ask: why stop at two? Indeed, we find that using more classifiers leads to better performance, but also introduces more model parameters, thereby risking overfitting. In this paper, we introduce a novel method called STochastic clAssifieRs (STAR) to address this problem. Instead of representing one classifier as a weight vector, STAR models it as a Gaussian distribution whose variance represents the inter-classifier discrepancy. With STAR, we can sample an arbitrary number of classifiers from the distribution, whilst keeping the model size the same as that of having two classifiers. Extensive experiments demonstrate that a variety of existing UDA methods can greatly benefit from STAR and achieve state-of-the-art performance on both image classification and semantic segmentation tasks. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Lu_Stochastic_Classifiers_for_Unsupervised_Domain_Adaptation_CVPR_2020_paper.pdf | null | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Lu_Stochastic_Classifiers_for_Unsupervised_Domain_Adaptation_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Lu_Stochastic_Classifiers_for_Unsupervised_Domain_Adaptation_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
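STAR, as described above, stores only a Gaussian over classifier weights and samples arbitrarily many classifiers from it. A minimal sketch of that sampling via the reparameterization trick follows, assuming a linear classifier head; all shapes and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
feat_dim, num_classes = 256, 10

# One Gaussian per classifier: mean and log-std are the only stored parameters,
# so the model size stays fixed regardless of how many classifiers are drawn.
mu = rng.standard_normal((feat_dim, num_classes)) * 0.01
log_sigma = np.full((feat_dim, num_classes), -3.0)

def sample_classifiers(n):
    """Draw n weight matrices W = mu + sigma * eps (reparameterization)."""
    eps = rng.standard_normal((n, feat_dim, num_classes))
    return mu + np.exp(log_sigma) * eps

features = rng.standard_normal((32, feat_dim))      # a batch of deep features
logits = np.einsum('bf,nfc->nbc', features, sample_classifiers(5))
# logits: (5 sampled classifiers, 32 examples, 10 classes); disagreement across
# the first axis plays the role of the inter-classifier discrepancy above.
```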
CIAGAN: Conditional Identity Anonymization Generative Adversarial Networks | Maxim Maximov, Ismail Elezi, Laura Leal-Taixe | The unprecedented increase in the usage of computer vision technology in society goes hand in hand with increased concern about data privacy. In many real-world scenarios like people tracking or action recognition, it is important to be able to process the data while carefully protecting people's identities. We propose and develop CIAGAN, a model for image and video anonymization based on conditional generative adversarial networks. Our model is able to remove the identifying characteristics of faces and bodies while producing high-quality images and videos that can be used for any computer vision task, such as detection or tracking. Unlike previous methods, we have full control over the de-identification (anonymization) procedure, ensuring both anonymization as well as diversity. We compare our method to several baselines and achieve state-of-the-art results. To facilitate further research, we make available the code and the models at https://github.com/dvl-tum/ciagan. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Maximov_CIAGAN_Conditional_Identity_Anonymization_Generative_Adversarial_Networks_CVPR_2020_paper.pdf | null | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Maximov_CIAGAN_Conditional_Identity_Anonymization_Generative_Adversarial_Networks_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Maximov_CIAGAN_Conditional_Identity_Anonymization_Generative_Adversarial_Networks_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Maximov_CIAGAN_Conditional_Identity_CVPR_2020_supplemental.pdf | null | null |
Hierarchically Robust Representation Learning | Qi Qian, Juhua Hu, Hao Li | With the tremendous success of deep learning in visual tasks, the representations extracted from intermediate layers of learned models, that is, deep features, attract much attention from researchers. Previous empirical analysis shows that those features can contain appropriate semantic information. Therefore, with a model trained on a large-scale benchmark data set (e.g., ImageNet), the extracted features can work well on other tasks. In this work, we investigate this phenomenon and demonstrate that deep features can be suboptimal because they are learned by minimizing the empirical risk. When the data distribution of the target task is different from that of the benchmark data set, the performance of deep features can degrade. Hence, we propose a hierarchically robust optimization method to learn more generic features. Considering example-level and concept-level robustness simultaneously, we formulate the problem as a distributionally robust optimization problem with Wasserstein ambiguity set constraints, and propose an efficient algorithm compatible with the conventional training pipeline. Experiments on benchmark data sets demonstrate the effectiveness of the robust deep representations. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Qian_Hierarchically_Robust_Representation_Learning_CVPR_2020_paper.pdf | http://arxiv.org/abs/1911.04047 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Qian_Hierarchically_Robust_Representation_Learning_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Qian_Hierarchically_Robust_Representation_Learning_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Qian_Hierarchically_Robust_Representation_CVPR_2020_supplemental.pdf | null | null |
Towards Robust Image Classification Using Sequential Attention Models | Daniel Zoran, Mike Chrzanowski, Po-Sen Huang, Sven Gowal, Alex Mott, Pushmeet Kohli | In this paper we propose to augment a modern neural-network architecture with an attention model inspired by human perception. Specifically, we adversarially train and analyze a neural model incorporating a human-inspired visual attention component that is guided by a recurrent top-down sequential process. Our experimental evaluation uncovers several notable findings about the robustness and behavior of this new model. First, introducing attention to the model significantly improves adversarial robustness, resulting in state-of-the-art ImageNet accuracies under a wide range of random targeted attack strengths. Second, we show that by varying the number of attention steps (glances/fixations) for which the model is unrolled, we are able to make its defense capabilities stronger, even in light of stronger attacks --- resulting in a "computational race" between the attacker and the defender. Finally, we show that some of the adversarial examples generated by attacking our model are quite different from conventional adversarial examples --- they contain global, salient and spatially coherent structures coming from the target class that would be recognizable even to a human, and work by distracting the attention of the model away from the main object in the original image. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Zoran_Towards_Robust_Image_Classification_Using_Sequential_Attention_Models_CVPR_2020_paper.pdf | http://arxiv.org/abs/1912.02184 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Zoran_Towards_Robust_Image_Classification_Using_Sequential_Attention_Models_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Zoran_Towards_Robust_Image_Classification_Using_Sequential_Attention_Models_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Zoran_Towards_Robust_Image_CVPR_2020_supplemental.pdf | null | null |
A Morphable Face Albedo Model | William A. P. Smith, Alassane Seck, Hannah Dee, Bernard Tiddeman, Joshua B. Tenenbaum, Bernhard Egger | In this paper, we bring together two divergent strands of research: photometric face capture and statistical 3D face appearance modelling. We propose a novel lightstage capture and processing pipeline for acquiring ear-to-ear, truly intrinsic diffuse and specular albedo maps that fully factor out the effects of illumination, camera and geometry. Using this pipeline, we capture a dataset of 50 scans and combine them with the only existing publicly available albedo dataset (3DRFE) of 23 scans. This allows us to build the first morphable face albedo model. We believe this is the first statistical analysis of the variability of facial specular albedo maps. This model can be used as a plug-in replacement for the texture model of the Basel Face Model, and we make our new albedo model publicly available. We ensure careful spectral calibration such that our model is built in a linear sRGB space, suitable for inverse rendering of images taken by typical cameras. We demonstrate our model in a state-of-the-art analysis-by-synthesis 3DMM fitting pipeline, are the first to integrate specular map estimation, and outperform the Basel Face Model in albedo reconstruction. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Smith_A_Morphable_Face_Albedo_Model_CVPR_2020_paper.pdf | http://arxiv.org/abs/2004.02711 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Smith_A_Morphable_Face_Albedo_Model_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Smith_A_Morphable_Face_Albedo_Model_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Smith_A_Morphable_Face_CVPR_2020_supplemental.pdf | null | null |
Fast Video Object Segmentation With Temporal Aggregation Network and Dynamic Template Matching | Xuhua Huang, Jiarui Xu, Yu-Wing Tai, Chi-Keung Tang | Significant progress has been made in Video Object Segmentation (VOS), the video object tracking task at its finest level. While the VOS task can be naturally decoupled into image semantic segmentation and video object tracking, significantly more research effort has been devoted to segmentation than to tracking. In this paper, we introduce "tracking-by-detection" into VOS, coherently integrating segmentation into tracking by proposing a new temporal aggregation network and a novel dynamic time-evolving template matching mechanism to achieve significantly improved performance. Notably, our method is entirely online and thus suitable for one-shot learning, and our end-to-end trainable model allows multiple object segmentation in one forward pass. We achieve new state-of-the-art performance on the DAVIS benchmark without complicated bells and whistles in both speed and accuracy, with a speed of 0.14 seconds per frame and a J&F measure of 75.9%, respectively. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Huang_Fast_Video_Object_Segmentation_With_Temporal_Aggregation_Network_and_Dynamic_CVPR_2020_paper.pdf | http://arxiv.org/abs/2007.05687 | https://www.youtube.com/watch?v=z55-layMCpY | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Huang_Fast_Video_Object_Segmentation_With_Temporal_Aggregation_Network_and_Dynamic_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Huang_Fast_Video_Object_Segmentation_With_Temporal_Aggregation_Network_and_Dynamic_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
Affinity Graph Supervision for Visual Recognition | Chu Wang, Babak Samari, Vladimir G. Kim, Siddhartha Chaudhuri, Kaleem Siddiqi | Affinity graphs are widely used in deep architectures, including graph convolutional neural networks and attention networks. Thus far, the literature has focused on abstracting features from such graphs, while the learning of the affinities themselves has been overlooked. Here we propose a principled method to directly supervise the learning of weights in affinity graphs, to exploit meaningful connections between entities in the data source. Applied to a visual attention network, our affinity supervision improves relationship recovery between objects, even without the use of manually annotated relationship labels. We further show that affinity learning between objects boosts scene categorization performance and that the supervision of affinity can also be applied to graphs built from mini-batches, for neural network training. In an image classification task we demonstrate consistent improvement over the baseline, with diverse network architectures and datasets. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Wang_Affinity_Graph_Supervision_for_Visual_Recognition_CVPR_2020_paper.pdf | http://arxiv.org/abs/2003.09049 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Wang_Affinity_Graph_Supervision_for_Visual_Recognition_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Wang_Affinity_Graph_Supervision_for_Visual_Recognition_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
Distilling Effective Supervision From Severe Label Noise | Zizhao Zhang, Han Zhang, Sercan O. Arik, Honglak Lee, Tomas Pfister | Collecting large-scale data with clean labels for supervised training of neural networks is practically challenging. Although noisy labels are usually cheap to acquire, existing methods suffer considerably from label noise. This paper targets the challenge of robust training in high label noise regimes. The key insight to achieve this goal is to wisely leverage a small trusted set to estimate exemplar weights and pseudo labels for noisy data in order to reuse them for supervised training. We present a holistic framework to train deep neural networks in a way that is highly robust to label noise. Our method sets the new state of the art on various types of label noise and achieves excellent performance on large-scale datasets with real-world label noise. For instance, on CIFAR100 with a 40% uniform noise ratio and only 10 trusted labeled data per class, our method achieves 80.2% classification accuracy, where the error rate is only 1.4% higher than that of a neural network trained without label noise. Moreover, when the noise ratio increases to 80%, our method still maintains a high accuracy of 75.5%, compared to the previous best accuracy of 48.2%. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Zhang_Distilling_Effective_Supervision_From_Severe_Label_Noise_CVPR_2020_paper.pdf | http://arxiv.org/abs/1910.00701 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Zhang_Distilling_Effective_Supervision_From_Severe_Label_Noise_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Zhang_Distilling_Effective_Supervision_From_Severe_Label_Noise_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
Temporally Distributed Networks for Fast Video Semantic Segmentation | Ping Hu, Fabian Caba, Oliver Wang, Zhe Lin, Stan Sclaroff, Federico Perazzi | We present TDNet, a temporally distributed network designed for fast and accurate video semantic segmentation. We observe that features extracted from a certain high-level layer of a deep CNN can be approximated by composing features extracted from several shallower sub-networks. Leveraging the inherent temporal continuity in videos, we distribute these sub-networks over sequential frames. Therefore, at each time step, we only need to perform a lightweight computation to extract a group of sub-features from a single sub-network. The full features used for segmentation are then recomposed by applying a novel attention propagation module that compensates for geometric deformation between frames. A grouped knowledge distillation loss is also introduced to further improve the representation power at both the full- and sub-feature levels. Experiments on Cityscapes, CamVid, and NYUD-v2 demonstrate that our method achieves state-of-the-art accuracy with significantly faster speed and lower latency. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Hu_Temporally_Distributed_Networks_for_Fast_Video_Semantic_Segmentation_CVPR_2020_paper.pdf | http://arxiv.org/abs/2004.01800 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Hu_Temporally_Distributed_Networks_for_Fast_Video_Semantic_Segmentation_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Hu_Temporally_Distributed_Networks_for_Fast_Video_Semantic_Segmentation_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
Noise Robust Generative Adversarial Networks | Takuhiro Kaneko, Tatsuya Harada | Generative adversarial networks (GANs) are neural networks that learn data distributions through adversarial training. Intensive studies have shown that recent GANs can reproduce training images with promising fidelity. However, when the training images contain noise, GANs reproduce that noise just as faithfully. As an alternative, we propose a novel family of GANs called noise robust GANs (NR-GANs), which can learn a clean image generator even when training images are noisy. In particular, NR-GANs can solve this problem without having complete noise information (e.g., the noise distribution type, noise amount, or signal-noise relationship). To achieve this, we introduce a noise generator and train it along with a clean image generator. However, without any constraints, there is no incentive to generate an image and noise separately. Therefore, we propose distribution and transformation constraints that encourage the noise generator to capture only the noise-specific components. In particular, considering such constraints under different assumptions, we devise two variants of NR-GANs for signal-independent noise and three variants of NR-GANs for signal-dependent noise. On three benchmark datasets, we demonstrate the effectiveness of NR-GANs in noise robust image generation. Furthermore, we show the applicability of NR-GANs in image denoising. Our code is available at https://github.com/takuhirok/NR-GAN/. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Kaneko_Noise_Robust_Generative_Adversarial_Networks_CVPR_2020_paper.pdf | null | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Kaneko_Noise_Robust_Generative_Adversarial_Networks_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Kaneko_Noise_Robust_Generative_Adversarial_Networks_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
DeepDeform: Learning Non-Rigid RGB-D Reconstruction With Semi-Supervised Data | Aljaz Bozic, Michael Zollhofer, Christian Theobalt, Matthias Niessner | Applying data-driven approaches to non-rigid 3D reconstruction has been difficult, which we believe can be attributed to the lack of a large-scale training corpus. Unfortunately, existing self-supervised methods fail for important cases such as highly non-rigid deformations. We first address this lack of data by introducing a novel semi-supervised strategy to obtain dense inter-frame correspondences from a sparse set of annotations. This way, we obtain a large dataset of 400 scenes, over 390,000 RGB-D frames, and 5,533 densely aligned frame pairs; in addition, we provide a test set along with several metrics for evaluation. Based on this corpus, we introduce a data-driven non-rigid feature matching approach, which we integrate into an optimization-based reconstruction pipeline. Here, we propose a new neural network that operates on RGB-D frames, while maintaining robustness under large non-rigid deformations and producing accurate predictions. Our approach significantly outperforms existing non-rigid reconstruction methods that do not use learned data terms, as well as learning-based approaches that only use self-supervision. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Bozic_DeepDeform_Learning_Non-Rigid_RGB-D_Reconstruction_With_Semi-Supervised_Data_CVPR_2020_paper.pdf | null | https://www.youtube.com/watch?v=aoCTS0-kYZc | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Bozic_DeepDeform_Learning_Non-Rigid_RGB-D_Reconstruction_With_Semi-Supervised_Data_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Bozic_DeepDeform_Learning_Non-Rigid_RGB-D_Reconstruction_With_Semi-Supervised_Data_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Bozic_DeepDeform_Learning_Non-Rigid_CVPR_2020_supplemental.pdf | null | null |
Learning Video Stabilization Using Optical Flow | Jiyang Yu, Ravi Ramamoorthi | We propose a novel neural network that infers the per-pixel warp fields for video stabilization from the optical flow fields of the input video. While previous learning-based video stabilization methods attempt to implicitly learn frame motions from color videos, our method resorts to optical flow for motion analysis and directly learns the stabilization using the optical flow. We also propose a pipeline that uses optical flow principal components for motion inpainting and warp field smoothing, making our method robust to moving objects, occlusion and optical flow inaccuracy, which are challenging for other video stabilization methods. Our method achieves quantitatively and visually better results than state-of-the-art optimization-based and deep-learning-based video stabilization methods. Our method also gives a 3x speed improvement compared to the optimization-based methods. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Yu_Learning_Video_Stabilization_Using_Optical_Flow_CVPR_2020_paper.pdf | null | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Yu_Learning_Video_Stabilization_Using_Optical_Flow_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Yu_Learning_Video_Stabilization_Using_Optical_Flow_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Yu_Learning_Video_Stabilization_CVPR_2020_supplemental.zip | null | null |
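The stabilization pipeline above uses optical flow principal components for motion inpainting and warp-field smoothing. A hedged sketch of projecting flow fields onto a PCA basis (via SVD) is shown below; the paper's actual basis construction and inpainting details may differ.

```python
import numpy as np

def pca_flow_projection(flows, k=8):
    """Project per-frame optical flow fields onto their top-k principal
    components: a low-dimensional motion basis usable for smoothing and for
    filling in (inpainting) unreliable flow regions. flows: (T, H, W, 2)."""
    t = flows.shape[0]
    flat = flows.reshape(t, -1)
    mean = flat.mean(axis=0, keepdims=True)
    _, _, vt = np.linalg.svd(flat - mean, full_matrices=False)
    coeffs = (flat - mean) @ vt[:k].T        # (T, k) low-dimensional motion codes
    recon = coeffs @ vt[:k] + mean           # reconstruction = smoothed flows
    return recon.reshape(flows.shape)

flows = np.random.default_rng(0).standard_normal((30, 64, 64, 2))
smooth = pca_flow_projection(flows)          # same shape, low-rank motion only
```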
Breaking the Cycle - Colleagues Are All You Need | Ori Nizan, Ayellet Tal | This paper proposes a novel approach to performing image-to-image translation between unpaired domains. Rather than relying on a cycle constraint, our method takes advantage of collaboration between various GANs. This results in a multi-modal method, in which multiple optional and diverse images are produced for a given image. Our model addresses some of the shortcomings of classical GANs: (1) It is able to remove large objects, such as glasses. (2) Since it does not need to support the cycle constraint, no irrelevant traces of the input are left on the generated image. (3) It manages to translate between domains that require large shape modifications. Our results are shown to outperform those generated by state-of-the-art methods for several challenging applications on commonly-used datasets, both qualitatively and quantitatively. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Nizan_Breaking_the_Cycle_-_Colleagues_Are_All_You_Need_CVPR_2020_paper.pdf | null | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Nizan_Breaking_the_Cycle_-_Colleagues_Are_All_You_Need_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Nizan_Breaking_the_Cycle_-_Colleagues_Are_All_You_Need_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Nizan_Breaking_the_Cycle_CVPR_2020_supplemental.zip | null | null |
Circle Loss: A Unified Perspective of Pair Similarity Optimization | Yifan Sun, Changmao Cheng, Yuhan Zhang, Chi Zhang, Liang Zheng, Zhongdao Wang, Yichen Wei | This paper provides a pair similarity optimization viewpoint on deep feature learning, aiming to maximize the within-class similarity s_p and minimize the between-class similarity s_n. We find a majority of loss functions, including the triplet loss and the softmax cross-entropy loss, embed s_n and s_p into similarity pairs and seek to reduce (s_n-s_p). Such an optimization manner is inflexible, because the penalty strength on every single similarity score is restricted to be equal. Our intuition is that if a similarity score deviates far from the optimum, it should be emphasized. To this end, we simply re-weight each similarity to highlight the less-optimized similarity scores. It results in a Circle loss, which is named due to its circular decision boundary. The Circle loss has a unified formula for two elemental deep feature learning paradigms, i.e., learning with class-level labels and pair-wise labels. Analytically, we show that the Circle loss offers a more flexible optimization approach towards a more definite convergence target, compared with the loss functions optimizing (s_n-s_p). Experimentally, we demonstrate the superiority of the Circle loss on a variety of deep feature learning tasks. On face recognition, person re-identification, as well as several fine-grained image retrieval datasets, the achieved performance is on par with the state of the art. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Sun_Circle_Loss_A_Unified_Perspective_of_Pair_Similarity_Optimization_CVPR_2020_paper.pdf | http://arxiv.org/abs/2002.10857 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Sun_Circle_Loss_A_Unified_Perspective_of_Pair_Similarity_Optimization_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Sun_Circle_Loss_A_Unified_Perspective_of_Pair_Similarity_Optimization_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
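As a hedged sketch of the re-weighting idea in the Circle loss row above: each similarity score receives its own multiplier proportional to its distance from an optimum, instead of the equal penalty strength of (s_n-s_p) losses. The constants below (optima O_p = 1+m, O_n = -m and margins Delta_p = 1-m, Delta_n = m) follow the commonly cited formulation and should be treated as assumptions.

```python
import numpy as np

def circle_loss(sp, sn, m=0.25, gamma=80.0):
    """Sketch of the Circle loss for one anchor.

    sp: within-class similarity scores s_p; sn: between-class scores s_n.
    Each score is re-weighted by its distance from its optimum, so the
    less-optimized scores are emphasized, yielding the circular decision
    boundary the abstract names the loss after.
    """
    ap = np.maximum(0.0, 1.0 + m - sp)       # alpha_p = [O_p - s_p]_+
    an = np.maximum(0.0, sn + m)             # alpha_n = [s_n - O_n]_+
    delta_p, delta_n = 1.0 - m, m
    logit_p = -gamma * ap * (sp - delta_p)
    logit_n = gamma * an * (sn - delta_n)
    # A production version would use logsumexp for numerical stability.
    return np.log1p(np.exp(logit_n).sum() * np.exp(logit_p).sum())

sp = np.array([0.9, 0.6])   # similarities to same-class samples
sn = np.array([0.4, 0.1])   # similarities to different-class samples
print(circle_loss(sp, sn))
```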
A Characteristic Function Approach to Deep Implicit Generative Modeling | Abdul Fatir Ansari, Jonathan Scarlett, Harold Soh | Implicit Generative Models (IGMs) such as GANs have emerged as effective data-driven models for generating samples, particularly images. In this paper, we formulate the problem of learning an IGM as minimizing the expected distance between characteristic functions. Specifically, we minimize the distance between characteristic functions of the real and generated data distributions under a suitably-chosen weighting distribution. This distance metric, which we term the characteristic function distance (CFD), can be (approximately) computed with a time complexity linear in the number of samples, in contrast to the quadratic-time Maximum Mean Discrepancy (MMD). By replacing the discrepancy measure in the critic of a GAN with the CFD, we obtain a model that is simple to implement and stable to train. The proposed metric enjoys desirable theoretical properties, including continuity and differentiability with respect to generator parameters, and continuity in the weak topology. We further propose a variation of the CFD in which the weighting distribution parameters are also optimized during training; this obviates the need for manual tuning and leads to an improvement in test power relative to CFD. We demonstrate experimentally that our proposed method outperforms WGAN and MMD-GAN variants on a variety of unsupervised image generation benchmarks. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Ansari_A_Characteristic_Function_Approach_to_Deep_Implicit_Generative_Modeling_CVPR_2020_paper.pdf | http://arxiv.org/abs/1909.07425 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Ansari_A_Characteristic_Function_Approach_to_Deep_Implicit_Generative_Modeling_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Ansari_A_Characteristic_Function_Approach_to_Deep_Implicit_Generative_Modeling_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Ansari_A_Characteristic_Function_CVPR_2020_supplemental.zip | null | null |
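A minimal sketch of an empirical characteristic function distance as described in the row above: sample frequencies from a weighting distribution (a Gaussian here, which is an assumption), estimate each batch's empirical characteristic function, and average the squared modulus of the difference. The cost is linear in the sample count, matching the abstract's claim.

```python
import numpy as np

def cfd(x, y, num_freqs=64, sigma=1.0, seed=0):
    """Empirical characteristic function distance between sample sets x, y.

    phi(t) = E[exp(i <t, x>)] is estimated per batch; the distance averages
    |phi_x(t) - phi_y(t)|^2 over frequencies t ~ N(0, sigma^2 I). Runs in
    O(n * num_freqs), i.e. linear in the sample count n.
    """
    rng = np.random.default_rng(seed)
    t = rng.normal(0.0, sigma, size=(num_freqs, x.shape[1]))
    phi_x = np.exp(1j * x @ t.T).mean(axis=0)   # (num_freqs,) complex
    phi_y = np.exp(1j * y @ t.T).mean(axis=0)
    return np.mean(np.abs(phi_x - phi_y) ** 2)

rng = np.random.default_rng(1)
real = rng.normal(0.0, 1.0, (512, 8))
fake = rng.normal(0.5, 1.0, (512, 8))
print(cfd(real, fake))   # grows as the two distributions diverge
```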
Bayesian Adversarial Human Motion Synthesis | Rui Zhao, Hui Su, Qiang Ji | We propose a generative probabilistic model for human motion synthesis. Our model has a hierarchy of three layers. At the bottom layer, we utilize a Hidden semi-Markov Model (HSMM), which explicitly models the spatial pose, temporal transitions, and speed variations in motion sequences. At the middle layer, HSMM parameters are treated as random variables which are allowed to vary across data instances in order to capture large intra- and inter-class variations. At the top layer, hyperparameters define the prior distributions of parameters, preventing the model from overfitting. By explicitly capturing the distribution of the data and parameters, our model has a more compact parameterization compared to GAN-based generative models. We formulate data synthesis as an adversarial Bayesian inference problem, in which the distributions of generator and discriminator parameters are obtained for data synthesis. We evaluate our method through a variety of metrics, showing advantages over competing methods in both fidelity and diversity. We further evaluate the synthesis quality as a data augmentation method for a recognition task. Finally, we demonstrate the benefit of our fully probabilistic approach in a data restoration task. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Zhao_Bayesian_Adversarial_Human_Motion_Synthesis_CVPR_2020_paper.pdf | null | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Zhao_Bayesian_Adversarial_Human_Motion_Synthesis_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Zhao_Bayesian_Adversarial_Human_Motion_Synthesis_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Zhao_Bayesian_Adversarial_Human_CVPR_2020_supplemental.zip | null | null |
On Positive-Unlabeled Classification in GAN | Tianyu Guo, Chang Xu, Jiajun Huang, Yunhe Wang, Boxin Shi, Chao Xu, Dacheng Tao | This paper defines a positive and unlabeled classification problem for standard GANs, which then leads to a novel technique to stabilize the training of the discriminator in GANs. Traditionally, real data are taken as positive while generated data are negative. This positive-negative classification criterion was kept fixed throughout the learning process of the discriminator, without considering the gradually improving quality of generated data, even though they could at times be more realistic than real data. In contrast, it is more reasonable to treat the generated data as unlabeled, which could be positive or negative according to their quality. The discriminator is thus a classifier for this positive and unlabeled classification problem, and we derive a new Positive-Unlabeled GAN (PUGAN). We theoretically discuss the global optimality the proposed model can achieve and its equivalent optimization goal. Empirically, we find that PUGAN can achieve comparable or even better performance than those sophisticated discriminator stabilization methods. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Guo_On_Positive-Unlabeled_Classification_in_GAN_CVPR_2020_paper.pdf | http://arxiv.org/abs/2002.01136 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Guo_On_Positive-Unlabeled_Classification_in_GAN_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Guo_On_Positive-Unlabeled_Classification_in_GAN_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Guo_On_Positive-Unlabeled_Classification_CVPR_2020_supplemental.pdf | null | null |
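The row above casts the discriminator as a positive-unlabeled classifier over real (positive) and generated (unlabeled) data. One standard way to train under that view is the non-negative PU risk estimator from the PU-learning literature (Kiryo et al.-style); the sketch below uses that estimator as an assumption and is not necessarily the exact PUGAN objective.

```python
import numpy as np

def nn_pu_risk(scores_pos, scores_unl, pi,
               loss=lambda s, y: np.log1p(np.exp(-y * s))):
    """Non-negative positive-unlabeled risk with a logistic loss.

    scores_pos: discriminator scores on real (positive) data.
    scores_unl: scores on generated (unlabeled) data, which may contain
    realistic, effectively-positive samples as training progresses.
    pi: assumed class prior of positives among the unlabeled data.
    """
    r_pos = loss(scores_pos, +1).mean()
    # Negative-class risk on unlabeled data with the positive contribution
    # subtracted; clamped at zero so the estimator cannot go negative.
    r_neg = loss(scores_unl, -1).mean() - pi * loss(scores_pos, -1).mean()
    return pi * r_pos + max(0.0, r_neg)

rng = np.random.default_rng(0)
print(nn_pu_risk(rng.normal(2.0, 1.0, 128), rng.normal(-0.5, 1.0, 128), pi=0.3))
```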
A Unified Object Motion and Affinity Model for Online Multi-Object Tracking | Junbo Yin, Wenguan Wang, Qinghao Meng, Ruigang Yang, Jianbing Shen | Current popular online multi-object tracking (MOT) solutions apply single object trackers (SOTs) to capture object motions, while often requiring an extra affinity network to associate objects, especially for the occluded ones. This brings extra computational overhead due to repetitive feature extraction for SOT and affinity computation. Meanwhile, the model size of the sophisticated affinity network is usually non-trivial. In this paper, we propose a novel MOT framework, named UMA, that unifies the object motion and affinity models into a single network, in order to learn a compact feature that is discriminative for both object motion and affinity measurement. In particular, UMA integrates single object tracking and metric learning into a unified triplet network by means of multi-task learning. This design brings the advantages of improved computational efficiency, low memory requirements and a simplified training procedure. In addition, we equip our model with a task-specific attention module, which is used to boost task-aware feature learning. The proposed UMA can be easily trained end-to-end, and is elegant - requiring only one training stage. Experimental results show that it achieves promising performance on several MOT Challenge benchmarks. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Yin_A_Unified_Object_Motion_and_Affinity_Model_for_Online_Multi-Object_CVPR_2020_paper.pdf | http://arxiv.org/abs/2003.11291 | https://www.youtube.com/watch?v=z7iBQpwjfk8 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Yin_A_Unified_Object_Motion_and_Affinity_Model_for_Online_Multi-Object_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Yin_A_Unified_Object_Motion_and_Affinity_Model_for_Online_Multi-Object_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
Image2StyleGAN++: How to Edit the Embedded Images? | Rameen Abdal, Yipeng Qin, Peter Wonka | We propose Image2StyleGAN++, a flexible image editing framework with many applications. Our framework extends the recent Image2StyleGAN in three ways. First, we introduce noise optimization as a complement to the W+ latent space embedding. Our noise optimization can restore high frequency features in images and thus significantly improves the quality of reconstructed images, e.g. a big increase of PSNR from 20 dB to 45 dB. Second, we extend the global W+ latent space embedding to enable local embeddings. Third, we combine embedding with activation tensor manipulation to perform high quality local edits along with global semantic edits on images. Such edits motivate various high quality image editing applications, e.g. image reconstruction, image inpainting, image crossover, local style transfer, image editing using scribbles, and attribute level feature transfer. Examples of the edited images are shown across the paper for visual inspection. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Abdal_Image2StyleGAN_How_to_Edit_the_Embedded_Images_CVPR_2020_paper.pdf | null | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Abdal_Image2StyleGAN_How_to_Edit_the_Embedded_Images_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Abdal_Image2StyleGAN_How_to_Edit_the_Embedded_Images_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Abdal_Image2StyleGAN_How_to_CVPR_2020_supplemental.pdf | null | null |
Efficient and Robust Shape Correspondence via Sparsity-Enforced Quadratic Assignment | Rui Xiang, Rongjie Lai, Hongkai Zhao | In this work, we introduce a novel local pairwise descriptor and then develop a simple, effective iterative method to solve the resulting quadratic assignment through sparsity control for shape correspondence between two approximately isometric surfaces. Our pairwise descriptor is based on the stiffness and mass matrices of a finite element approximation of the Laplace-Beltrami differential operator, which is local in space, sparse to represent, and extremely easy to compute while containing global information. It allows us to deal with open surfaces, partial matching, and topological perturbations robustly. To solve the resulting quadratic assignment problem efficiently, the two key ideas of our iterative algorithm are: 1) select pairs with good (approximate) correspondence as anchor points, and 2) solve a regularized quadratic assignment problem only in the neighborhood of selected anchor points through sparsity control. These two ingredients can quickly improve and increase the number of anchor points while significantly reducing the computational cost of each quadratic assignment iteration. With enough high-quality anchor points, one may use various pointwise global features with reference to these anchor points to further improve the dense shape correspondence. We use various experiments to show the efficiency, quality, and versatility of our method on large data sets, patches, and point clouds (without global meshes). | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Xiang_Efficient_and_Robust_Shape_Correspondence_via_Sparsity-Enforced_Quadratic_Assignment_CVPR_2020_paper.pdf | http://arxiv.org/abs/2003.08680 | https://www.youtube.com/watch?v=x9yR5esPcUg | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Xiang_Efficient_and_Robust_Shape_Correspondence_via_Sparsity-Enforced_Quadratic_Assignment_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Xiang_Efficient_and_Robust_Shape_Correspondence_via_Sparsity-Enforced_Quadratic_Assignment_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
PolarNet: An Improved Grid Representation for Online LiDAR Point Clouds Semantic Segmentation | Yang Zhang, Zixiang Zhou, Philip David, Xiangyu Yue, Zerong Xi, Boqing Gong, Hassan Foroosh | The requirement of fine-grained perception by autonomous driving systems has recently led to increased research on the online semantic segmentation of single-scan LiDAR. Emerging datasets and technological advancements have enabled researchers to benchmark this problem and improve the applicable semantic segmentation algorithms. Still, online semantic segmentation of LiDAR scans in autonomous driving applications remains challenging for three reasons: (1) the need for near-real-time latency with limited hardware, (2) points are distributed unevenly across space, and (3) an increasing number of more fine-grained semantic classes. The combination of the aforementioned challenges motivates us to propose a new LiDAR-specific, KNN-free segmentation algorithm - PolarNet. Instead of using common spherical or bird's-eye-view projection, our polar bird's-eye-view representation balances the points per grid cell and thus indirectly redistributes the network's attention over the long-tailed point distribution along the radial axis in polar coordinates. We find that our encoding scheme greatly increases the mIoU on three drastically different real urban LiDAR single-scan segmentation datasets while retaining ultra-low latency and near-real-time throughput. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Zhang_PolarNet_An_Improved_Grid_Representation_for_Online_LiDAR_Point_Clouds_CVPR_2020_paper.pdf | http://arxiv.org/abs/2003.14032 | https://www.youtube.com/watch?v=iIhttRSMqjE | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Zhang_PolarNet_An_Improved_Grid_Representation_for_Online_LiDAR_Point_Clouds_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Zhang_PolarNet_An_Improved_Grid_Representation_for_Online_LiDAR_Point_Clouds_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
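The core representation change in the row above is quantizing LiDAR points onto a polar rather than Cartesian bird's-eye-view grid, so the dense near-sensor returns spread across many cells. A minimal binning sketch follows; the grid extents and resolutions are illustrative assumptions.

```python
import numpy as np

def polar_bev_bins(points, r_max=50.0, num_r=480, num_theta=360):
    """Assign each LiDAR point (x, y, ...) to a (radius, angle) grid cell.

    Compared to a Cartesian grid, equal-angle sectors spread the dense
    near-sensor returns over many cells, balancing the points per cell.
    """
    x, y = points[:, 0], points[:, 1]
    r = np.hypot(x, y)
    theta = np.arctan2(y, x)                                   # (-pi, pi]
    r_idx = np.clip((r / r_max * num_r).astype(int), 0, num_r - 1)
    t_idx = ((theta + np.pi) / (2 * np.pi) * num_theta).astype(int) % num_theta
    return r_idx, t_idx

pts = np.random.default_rng(0).uniform(-50, 50, (100_000, 3))
r_idx, t_idx = polar_bev_bins(pts)
counts = np.bincount(r_idx * 360 + t_idx, minlength=480 * 360)  # points per cell
```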
CascadePSP: Toward Class-Agnostic and Very High-Resolution Segmentation via Global and Local Refinement | Ho Kei Cheng, Jihoon Chung, Yu-Wing Tai, Chi-Keung Tang | State-of-the-art semantic segmentation methods were almost exclusively trained on images within a fixed resolution range. These segmentations are inaccurate for very high-resolution images since using bicubic upsampling of low-resolution segmentation does not adequately capture high-resolution details along object boundaries. In this paper, we propose a novel approach to address the high-resolution segmentation problem without using any high-resolution training data. The key insight is our CascadePSP network which refines and corrects local boundaries whenever possible. Although our network is trained with low-resolution segmentation data, our method is applicable to any resolution even for very high-resolution images larger than 4K. We present quantitative and qualitative studies on different datasets to show that CascadePSP can reveal pixel-accurate segmentation boundaries using our novel refinement module without any finetuning. Thus, our method can be regarded as class-agnostic. Finally, we demonstrate the application of our model to scene parsing in multi-class segmentation. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Cheng_CascadePSP_Toward_Class-Agnostic_and_Very_High-Resolution_Segmentation_via_Global_and_CVPR_2020_paper.pdf | http://arxiv.org/abs/2005.02551 | https://www.youtube.com/watch?v=VLEQB5QUOtQ | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Cheng_CascadePSP_Toward_Class-Agnostic_and_Very_High-Resolution_Segmentation_via_Global_and_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Cheng_CascadePSP_Toward_Class-Agnostic_and_Very_High-Resolution_Segmentation_via_Global_and_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Cheng_CascadePSP_Toward_Class-Agnostic_CVPR_2020_supplemental.pdf | null | null |
GHUM & GHUML: Generative 3D Human Shape and Articulated Pose Models | Hongyi Xu, Eduard Gabriel Bazavan, Andrei Zanfir, William T. Freeman, Rahul Sukthankar, Cristian Sminchisescu | We present a statistical, articulated 3D human shape modeling pipeline within a fully trainable, modular, deep learning framework. Given high-resolution complete 3D body scans of humans, captured in various poses, together with additional closeups of their head and facial expressions, as well as hand articulation, and given initial, artist-designed, gender-neutral rigged quad-meshes, we train all model parameters, including non-linear shape spaces based on variational auto-encoders, pose-space deformation correctives, skeleton joint center predictors, and blend skinning functions, in a single consistent learning loop. The models are simultaneously trained with all the 3D dynamic scan data (over 60,000 diverse human configurations in our new dataset) in order to capture correlations and ensure consistency of the various components. The models support facial expression analysis, as well as body (with detailed hand) shape and pose estimation. We provide fully trainable generic human models at different resolutions (the moderate-resolution GHUM, with 10,168 vertices, and the low-resolution GHUML(ite), with 3,194 vertices), run comparisons between them, analyze the impact of different components, and illustrate their reconstruction from image data. The models will be available for research. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Xu_GHUM__GHUML_Generative_3D_Human_Shape_and_Articulated_Pose_CVPR_2020_paper.pdf | null | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Xu_GHUM__GHUML_Generative_3D_Human_Shape_and_Articulated_Pose_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Xu_GHUM__GHUML_Generative_3D_Human_Shape_and_Articulated_Pose_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
Panoptic-Based Image Synthesis | Aysegul Dundar, Karan Sapra, Guilin Liu, Andrew Tao, Bryan Catanzaro | Conditional image synthesis for generating photorealistic images serves various applications, from content editing to content generation. Previous conditional image synthesis algorithms mostly rely on semantic maps and often fail in complex environments where multiple instances occlude each other. We propose a panoptic-aware image synthesis network that generates high-fidelity, photorealistic images conditioned on panoptic maps, which unify semantic and instance information. To achieve this, we efficiently use panoptic maps in convolution and upsampling layers. We show that with the proposed changes to the generator, we can improve on the previous state-of-the-art methods by generating images of complex instance-interaction environments with higher fidelity and tiny objects in greater detail. Furthermore, our proposed method also outperforms the previous state-of-the-art methods on the mean IoU (Intersection over Union) and detAP (Detection Average Precision) metrics. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Dundar_Panoptic-Based_Image_Synthesis_CVPR_2020_paper.pdf | http://arxiv.org/abs/2004.10289 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Dundar_Panoptic-Based_Image_Synthesis_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Dundar_Panoptic-Based_Image_Synthesis_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
Unity Style Transfer for Person Re-Identification | Chong Liu, Xiaojun Chang, Yi-Dong Shen | Style variation has been a major challenge for person re-identification, which aims to match the same pedestrians across different cameras. Existing works attempted to address this problem with camera-invariant descriptor subspace learning. However, image artifacts increase when the difference between the images taken by different cameras is larger. To solve this problem, we propose a UnityStyle adaptation method, which can smooth the style disparities within the same camera and across different cameras. Specifically, we first create UnityGAN to learn the style changes between cameras, producing shape-stable, style-unified images for each camera, which we call UnityStyle images. Meanwhile, we use UnityStyle images to eliminate style differences between different images, which enables better matching between query and gallery. We then apply the proposed method to Re-ID models, expecting to obtain more style-robust deep features for querying. We conduct extensive experiments on widely used benchmark datasets to evaluate the performance of the proposed framework, the results of which confirm the superiority of the proposed model. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Liu_Unity_Style_Transfer_for_Person_Re-Identification_CVPR_2020_paper.pdf | http://arxiv.org/abs/2003.02068 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Liu_Unity_Style_Transfer_for_Person_Re-Identification_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Liu_Unity_Style_Transfer_for_Person_Re-Identification_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
Minimal Solvers for 3D Scan Alignment With Pairs of Intersecting Lines | Andre Mateus, Srikumar Ramalingam, Pedro Miraldo | We explore the possibility of using line intersection constraints for 3D scan registration. Typical 3D registration algorithms exploit point and plane correspondences, while line intersection constraints have not been used in the context of 3D scan registration before. Constraints from a match of pairs of intersecting lines in two 3D scans can be seen as two 3D line intersections, a plane correspondence, and a point correspondence. In this paper, we present minimal solvers that combine these different types of constraints: 1) three line intersections and one point match; 2) one line intersection and two point matches; 3) three line intersections and one plane match; 4) one line intersection and two plane matches; and 5) one line intersection, one point match, and one plane match. To use all the available solvers, we present a hybrid RANSAC loop. We propose a non-linear refinement technique that uses all the inliers obtained from RANSAC. Extensive experiments with simulated data and two real-world datasets show that the use of these features and the combined solvers improves accuracy. The code is available. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Mateus_Minimal_Solvers_for_3D_Scan_Alignment_With_Pairs_of_Intersecting_CVPR_2020_paper.pdf | null | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Mateus_Minimal_Solvers_for_3D_Scan_Alignment_With_Pairs_of_Intersecting_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Mateus_Minimal_Solvers_for_3D_Scan_Alignment_With_Pairs_of_Intersecting_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Mateus_Minimal_Solvers_for_CVPR_2020_supplemental.pdf | null | null |
Distilling Knowledge From Graph Convolutional Networks | Yiding Yang, Jiayan Qiu, Mingli Song, Dacheng Tao, Xinchao Wang | Existing knowledge distillation methods focus on convolutional neural networks (CNNs), where the input samples like images lie in a grid domain, and have largely overlooked graph convolutional networks (GCNs) that handle non-grid data. In this paper, we propose, to the best of our knowledge, the first dedicated approach to distilling knowledge from a pre-trained GCN model. To enable the knowledge transfer from the teacher GCN to the student, we propose a local structure preserving module that explicitly accounts for the topological semantics of the teacher. In this module, the local structure information from both the teacher and the student is extracted as distributions, and hence minimizing the distance between these distributions enables topology-aware knowledge transfer from the teacher, yielding a compact yet high-performance student model. Moreover, the proposed approach is readily extendable to dynamic graph models, where the input graphs for the teacher and the student may differ. We evaluate the proposed method on two different datasets using GCN models of different architectures, and demonstrate that our method achieves state-of-the-art knowledge distillation performance for GCN models. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Yang_Distilling_Knowledge_From_Graph_Convolutional_Networks_CVPR_2020_paper.pdf | http://arxiv.org/abs/2003.10477 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Yang_Distilling_Knowledge_From_Graph_Convolutional_Networks_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Yang_Distilling_Knowledge_From_Graph_Convolutional_Networks_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
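The local structure preserving module above extracts, for each node, a distribution over its neighborhood from both teacher and student and minimizes the distance between the two. The sketch below is a hedged toy version using dot-product similarities and KL divergence, both of which are simplifying assumptions rather than the paper's exact choices.

```python
import numpy as np

def local_structure_kl(t_emb, s_emb, neighbors):
    """For each node, turn similarities to its neighbors into a distribution
    (softmax over the neighborhood) for teacher and student embeddings, then
    average the KL divergence between the two distributions."""
    def local_dist(emb, i, nbrs):
        sims = emb[nbrs] @ emb[i]            # similarity to each neighbor
        e = np.exp(sims - sims.max())        # stable softmax
        return e / e.sum()
    kls = []
    for i, nbrs in neighbors.items():
        p = local_dist(t_emb, i, nbrs)       # teacher's local structure
        q = local_dist(s_emb, i, nbrs)       # student's local structure
        kls.append(np.sum(p * np.log(p / q)))
    return float(np.mean(kls))

rng = np.random.default_rng(0)
teacher = rng.standard_normal((5, 16))       # teacher node embeddings
student = rng.standard_normal((5, 8))        # student can use a smaller width
print(local_structure_kl(teacher, student, {0: [1, 2], 1: [0, 3], 2: [0, 4]}))
```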
Learning Oracle Attention for High-Fidelity Face Completion | Tong Zhou, Changxing Ding, Shaowen Lin, Xinchao Wang, Dacheng Tao | High-fidelity face completion is a challenging task due to the rich and subtle facial textures involved. What makes it more complicated is the correlations between different facial components, for example, the symmetry in texture and structure between both eyes. While recent works adopted the attention mechanism to learn the contextual relations among elements of the face, they have largely overlooked the disastrous impacts of inaccurate attention scores; in addition, they fail to pay sufficient attention to key facial components, the completion results of which largely determine the authenticity of a face image. Accordingly, in this paper, we design a comprehensive framework for face completion based on the U-Net structure. Specifically, we propose a dual spatial attention module to efficiently learn the correlations between facial textures at multiple scales; moreover, we provide an oracle supervision signal to the attention module to ensure that the obtained attention scores are reasonable. Furthermore, we take the location of the facial components as prior knowledge and impose a multi-discriminator on these regions, with which the fidelity of facial components is significantly promoted. Extensive experiments on two high-resolution face datasets including CelebA-HQ and Flickr-Faces-HQ demonstrate that the proposed approach outperforms state-of-the-art methods by large margins. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Zhou_Learning_Oracle_Attention_for_High-Fidelity_Face_Completion_CVPR_2020_paper.pdf | http://arxiv.org/abs/2003.13903 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Zhou_Learning_Oracle_Attention_for_High-Fidelity_Face_Completion_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Zhou_Learning_Oracle_Attention_for_High-Fidelity_Face_Completion_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Zhou_Learning_Oracle_Attention_CVPR_2020_supplemental.pdf | null | null |
Image Super-Resolution With Cross-Scale Non-Local Attention and Exhaustive Self-Exemplars Mining | Yiqun Mei, Yuchen Fan, Yuqian Zhou, Lichao Huang, Thomas S. Huang, Honghui Shi | Deep convolution-based single image super-resolution (SISR) networks embrace the benefits of learning from large-scale external image resources for local recovery, yet most existing works have ignored the long-range feature-wise similarities in natural images. Some recent works have successfully leveraged this intrinsic feature correlation by exploring non-local attention modules. However, none of the current deep models have studied another inherent property of images: cross-scale feature correlation. In this paper, we propose the first Cross-Scale Non-Local (CS-NL) attention module with integration into a recurrent neural network. By combining the new CS-NL prior with local and in-scale non-local priors in a powerful recurrent fusion cell, we can find more cross-scale feature correlations within a single low-resolution (LR) image. The performance of SISR is significantly improved by exhaustively integrating all possible priors. Extensive experiments demonstrate the effectiveness of the proposed CS-NL module by setting new state-of-the-arts on multiple SISR benchmarks. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Mei_Image_Super-Resolution_With_Cross-Scale_Non-Local_Attention_and_Exhaustive_Self-Exemplars_Mining_CVPR_2020_paper.pdf | http://arxiv.org/abs/2006.01424 | https://www.youtube.com/watch?v=1eJ2aOEKv58 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Mei_Image_Super-Resolution_With_Cross-Scale_Non-Local_Attention_and_Exhaustive_Self-Exemplars_Mining_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Mei_Image_Super-Resolution_With_Cross-Scale_Non-Local_Attention_and_Exhaustive_Self-Exemplars_Mining_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Mei_Image_Super-Resolution_With_CVPR_2020_supplemental.pdf | null | null |
On the Regularization Properties of Structured Dropout | Ambar Pal, Connor Lane, Rene Vidal, Benjamin D. Haeffele | Dropout and its extensions (e.g. DropBlock and DropConnect) are popular heuristics for training neural networks, which have been shown to improve generalization performance in practice. However, a theoretical understanding of their optimization and regularization properties remains elusive. Recent work shows that in the case of single hidden-layer linear networks, Dropout is a stochastic gradient descent method for minimizing a regularized loss, and that the regularizer induces solutions that are low-rank and balanced. In this work we show that for single hidden-layer linear networks, DropBlock induces spectral k-support norm regularization, and promotes solutions that are low-rank and have factors with equal norm. We also show that the global minimizer for DropBlock can be computed in closed form, and that DropConnect is equivalent to Dropout. We then show that some of these results can be extended to a general class of Dropout-strategies, and, with some assumptions, to deep non-linear networks when Dropout is applied to the last layer. We verify our theoretical claims and assumptions experimentally with commonly used network architectures. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Pal_On_the_Regularization_Properties_of_Structured_Dropout_CVPR_2020_paper.pdf | http://arxiv.org/abs/1910.14186 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Pal_On_the_Regularization_Properties_of_Structured_Dropout_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Pal_On_the_Regularization_Properties_of_Structured_Dropout_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Pal_On_the_Regularization_CVPR_2020_supplemental.pdf | null | null |
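The single-hidden-layer linear result this line of work builds on is easy to verify numerically: for a network y ≈ U diag(m) V x with an i.i.d. Bernoulli keep-mask m rescaled by 1/(1-p), the dropout-expected squared loss equals the deterministic loss plus an explicit variance-induced regularizer. A Monte Carlo check of that classical Dropout identity (not of the paper's new DropBlock results) might look like this:

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, r, d_out, p = 5, 4, 3, 0.3
U = rng.normal(size=(d_out, r))          # second (output) layer
V = rng.normal(size=(r, d_in))           # first (hidden) layer
x, y = rng.normal(size=d_in), rng.normal(size=d_out)
z = V @ x                                # hidden activations

# Monte Carlo estimate of the dropout-expected loss.
masks = rng.random((200_000, r)) > p     # keep each unit with prob 1-p
outs = (masks * z) @ U.T / (1 - p)
mc = np.mean(np.sum((outs - y) ** 2, axis=1))

# Closed form: deterministic loss + variance-induced regularizer.
closed = np.sum((U @ z - y) ** 2) + p / (1 - p) * np.sum(
    np.linalg.norm(U, axis=0) ** 2 * z ** 2)
print(mc, closed)                        # agree up to Monte Carlo error
```

The regularizer term is what induces the low-rank, balanced solutions the abstract refers to; DropBlock's block-structured masks change its form to the spectral k-support norm analyzed in the paper.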
Deep Geometric Functional Maps: Robust Feature Learning for Shape Correspondence | Nicolas Donati, Abhishek Sharma, Maks Ovsjanikov | We present a novel learning-based approach for computing correspondences between non-rigid 3D shapes. Unlike previous methods that either require extensive training data or operate on handcrafted input descriptors and thus generalize poorly across diverse datasets, our approach is both accurate and robust to changes in shape structure. Key to our method is a feature-extraction network that learns directly from raw shape geometry, combined with a novel regularized map extraction layer and loss, based on the functional map representation. We demonstrate through extensive experiments in challenging shape matching scenarios that our method can learn from less training data than existing supervised approaches and generalizes significantly better than current descriptor-based learning methods. Our source code is available at: https://github.com/LIX-shape-analysis/GeomFmaps. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Donati_Deep_Geometric_Functional_Maps_Robust_Feature_Learning_for_Shape_Correspondence_CVPR_2020_paper.pdf | http://arxiv.org/abs/2003.14286 | https://www.youtube.com/watch?v=_K15Gg7MNTY | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Donati_Deep_Geometric_Functional_Maps_Robust_Feature_Learning_for_Shape_Correspondence_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Donati_Deep_Geometric_Functional_Maps_Robust_Feature_Learning_for_Shape_Correspondence_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Donati_Deep_Geometric_Functional_CVPR_2020_supplemental.pdf | null | null |
Iteratively-Refined Interactive 3D Medical Image Segmentation With Multi-Agent Reinforcement Learning | Xuan Liao, Wenhao Li, Qisen Xu, Xiangfeng Wang, Bo Jin, Xiaoyun Zhang, Yanfeng Wang, Ya Zhang | Existing automatic 3D image segmentation methods usually fail to meet the demands of clinical use. Many studies have explored an interactive strategy to improve the image segmentation performance by iteratively incorporating user hints. However, the dynamic process for successive interactions is largely ignored. We here propose to model the dynamic process of iterative interactive image segmentation as a Markov decision process (MDP) and solve it with reinforcement learning (RL). Unfortunately, it is intractable to use single-agent RL for voxel-wise prediction due to the large exploration space. To reduce the exploration space to a tractable size, we treat each voxel as an agent with a shared voxel-level behavior strategy so that it can be solved with multi-agent reinforcement learning. An additional advantage of this multi-agent model is to capture the dependency among voxels for the segmentation task. Meanwhile, to enrich the information of previous segmentations, we reserve the prediction uncertainty in the state space of the MDP and derive an adjustment action space leading to a more precise and finer segmentation. In addition, to improve the efficiency of exploration, we design a relative cross-entropy gain-based reward to update the policy in a constrained direction. Experimental results on various medical datasets have shown that our method significantly outperforms existing state-of-the-art methods, with fewer interactions and faster convergence. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Liao_Iteratively-Refined_Interactive_3D_Medical_Image_Segmentation_With_Multi-Agent_Reinforcement_Learning_CVPR_2020_paper.pdf | http://arxiv.org/abs/1911.10334 | https://www.youtube.com/watch?v=z9ouWEwyTPg | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Liao_Iteratively-Refined_Interactive_3D_Medical_Image_Segmentation_With_Multi-Agent_Reinforcement_Learning_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Liao_Iteratively-Refined_Interactive_3D_Medical_Image_Segmentation_With_Multi-Agent_Reinforcement_Learning_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
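As one concrete reading of the reward design, the "relative cross-entropy gain" can be taken as the per-voxel drop in cross-entropy between two successive predictions, so a positive reward means the latest interaction improved that voxel. A minimal NumPy sketch under that assumption (the function name and exact form are ours, not the paper's code):

```python
import numpy as np

def relative_ce_gain_reward(prob_prev, prob_curr, gt):
    """Per-voxel reward: decrease in cross-entropy after one refinement step.

    prob_prev, prob_curr: predicted foreground probabilities in (0, 1).
    gt: binary ground-truth mask. All arrays share the same voxel shape.
    """
    eps = 1e-7
    ce = lambda p: -(gt * np.log(p + eps) + (1 - gt) * np.log(1 - p + eps))
    return ce(prob_prev) - ce(prob_curr)   # positive where the update helped
```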
Editing in Style: Uncovering the Local Semantics of GANs | Edo Collins, Raja Bala, Bob Price, Sabine Susstrunk | While the quality of GAN image synthesis has improved tremendously in recent years, our ability to control and condition the output is still limited. Focusing on StyleGAN, we introduce a simple and effective method for making local, semantically-aware edits to a target output image. This is accomplished by borrowing elements from a source image, also a GAN output, via a novel manipulation of style vectors. Our method requires neither supervision from an external model, nor involves complex spatial morphing operations. Instead, it relies on the emergent disentanglement of semantic objects that is learned by StyleGAN during its training. Semantic editing is demonstrated on GANs producing human faces, indoor scenes, cats, and cars. We measure the locality and photorealism of the edits produced by our method, and find that it accomplishes both. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Collins_Editing_in_Style_Uncovering_the_Local_Semantics_of_GANs_CVPR_2020_paper.pdf | http://arxiv.org/abs/2004.14367 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Collins_Editing_in_Style_Uncovering_the_Local_Semantics_of_GANs_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Collins_Editing_in_Style_Uncovering_the_Local_Semantics_of_GANs_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Collins_Editing_in_Style_CVPR_2020_supplemental.pdf | null | null |
A Graduated Filter Method for Large Scale Robust Estimation | Huu Le, Christopher Zach | Due to the highly non-convex nature of large-scale robust parameter estimation, avoiding poor local minima is challenging in real-world applications where input data is contaminated by a large or unknown fraction of outliers. In this paper, we introduce a novel solver for robust estimation that possesses a strong ability to escape poor local minima. Our algorithm is built upon the class of traditional graduated optimization techniques, which are considered state-of-the-art local methods to solve problems having many poor minima. The novelty of our work lies in the introduction of an adaptive kernel (or residual) scaling scheme, which allows us to achieve faster convergence rates. Like other existing methods that aim to return good local minima for robust estimation tasks, our method relaxes the original robust problem, but adapts a filter framework from non-linear constrained optimization to automatically choose the level of relaxation. Experimental results on real large-scale datasets such as bundle adjustment instances demonstrate that our proposed method achieves competitive results. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Le_A_Graduated_Filter_Method_for_Large_Scale_Robust_Estimation_CVPR_2020_paper.pdf | http://arxiv.org/abs/2003.09080 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Le_A_Graduated_Filter_Method_for_Large_Scale_Robust_Estimation_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Le_A_Graduated_Filter_Method_for_Large_Scale_Robust_Estimation_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Le_A_Graduated_Filter_CVPR_2020_supplemental.pdf | null | null |
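Graduated optimization of this kind is straightforward to emulate with off-the-shelf tools: solve a sequence of robust least-squares problems whose kernel scale shrinks, warm-starting each stage from the previous solution. The sketch below uses SciPy's Cauchy loss with a fixed scale schedule on a toy line fit; the paper's contribution is precisely to replace such a fixed schedule with an adaptive, filter-based choice of scaling.

```python
import numpy as np
from scipy.optimize import least_squares

def graduated_robust_fit(residual_fn, x0, scales=(64.0, 16.0, 4.0, 1.0)):
    """Graduated optimization: broad robust kernel first, then tighten,
    warm-starting each stage from the last (fixed schedule for illustration)."""
    x = np.asarray(x0, dtype=float)
    for s in scales:
        x = least_squares(residual_fn, x, loss="cauchy", f_scale=s).x
    return x

# Toy line fit with ~30% gross outliers.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 100)
y = 2.0 * t + 0.5 + 0.01 * rng.normal(size=t.size)
out = rng.random(t.size) < 0.3
y[out] += rng.normal(scale=5.0, size=out.sum())

res = lambda params: params[0] * t + params[1] - y
print(graduated_robust_fit(res, x0=[0.0, 0.0]))   # close to [2.0, 0.5]
```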
Discovering Synchronized Subsets of Sequences: A Large Scale Solution | Evangelos Sariyanidi, Casey J. Zampella, Keith G. Bartley, John D. Herrington, Theodore D. Satterthwaite, Robert T. Schultz, Birkan Tunc | Finding the largest subset of sequences (i.e., time series) that are correlated above a certain threshold, within large datasets, is of significant interest for computer vision and pattern recognition problems across domains, including behavior analysis, computational biology, neuroscience, and finance. Maximal clique algorithms can be used to solve this problem, but they are not scalable. We present an approximate, but highly efficient and scalable, method that represents the search space as a union of sets called epsilon-expanded clusters, one of which is theoretically guaranteed to contain the largest subset of synchronized sequences. The method finds synchronized sets by fitting a Euclidean ball on epsilon-expanded clusters, using Jung's theorem. We validate the method on data from the three distinct domains of facial behavior analysis, finance, and neuroscience, where we respectively discover the synchrony among pixels of face videos, stock market item prices, and dynamic brain connectivity data. Experiments show that our method produces results comparable to, but up to 300 times faster than, maximal clique algorithms, with speed gains increasing exponentially with the number of input sequences. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Sariyanidi_Discovering_Synchronized_Subsets_of_Sequences_A_Large_Scale_Solution_CVPR_2020_paper.pdf | null | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Sariyanidi_Discovering_Synchronized_Subsets_of_Sequences_A_Large_Scale_Solution_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Sariyanidi_Discovering_Synchronized_Subsets_of_Sequences_A_Large_Scale_Solution_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Sariyanidi_Discovering_Synchronized_Subsets_CVPR_2020_supplemental.pdf | null | null |
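The geometric core is worth spelling out: z-normalized sequences live on a unit sphere where pairwise correlation maps to Euclidean distance, so "all pairwise correlations above rho" bounds the set's diameter, and Jung's theorem converts that diameter into an enclosing-ball radius. A small sketch of those two facts, in our notation (the epsilon-expanded-cluster search itself is not shown):

```python
import numpy as np

def znorm(X):
    """Z-normalize rows to zero mean and unit l2 norm, so that
    ||xi - xj||^2 = 2 - 2 * corr(xi, xj) for rows xi, xj."""
    X = X - X.mean(axis=1, keepdims=True)
    return X / np.linalg.norm(X, axis=1, keepdims=True)

def jung_radius(rho, n):
    """Jung's theorem: a set of diameter d = sqrt(2 - 2*rho) in R^n
    (pairwise correlation >= rho) fits in a ball of this radius."""
    d = np.sqrt(2.0 - 2.0 * rho)
    return d * np.sqrt(n / (2.0 * (n + 1.0)))

X = znorm(np.random.default_rng(0).normal(size=(10, 200)))
print(jung_radius(rho=0.8, n=X.shape[1]))   # ball radius certifying synchrony
```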
DeepCap: Monocular Human Performance Capture Using Weak Supervision | Marc Habermann, Weipeng Xu, Michael Zollhofer, Gerard Pons-Moll, Christian Theobalt | Human performance capture is a highly important computer vision problem with many applications in movie production and virtual/augmented reality. Many previous performance capture approaches either required expensive multi-view setups or did not recover dense space-time coherent geometry with frame-to-frame correspondences. We propose a novel deep learning approach for monocular dense human performance capture. Our method is trained in a weakly supervised manner based on multi-view supervision completely removing the need for training data with 3D ground truth annotations. The network architecture is based on two separate networks that disentangle the task into a pose estimation and a non-rigid surface deformation step. Extensive qualitative and quantitative evaluations show that our approach outperforms the state of the art in terms of quality and robustness. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Habermann_DeepCap_Monocular_Human_Performance_Capture_Using_Weak_Supervision_CVPR_2020_paper.pdf | null | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Habermann_DeepCap_Monocular_Human_Performance_Capture_Using_Weak_Supervision_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Habermann_DeepCap_Monocular_Human_Performance_Capture_Using_Weak_Supervision_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Habermann_DeepCap_Monocular_Human_CVPR_2020_supplemental.pdf | null | null |
Learning Physics-Guided Face Relighting Under Directional Light | Thomas Nestmeyer, Jean-Francois Lalonde, Iain Matthews, Andreas Lehrmann | Relighting is an essential step in realistically transferring objects from a captured image into another environment. For example, authentic telepresence in Augmented Reality requires faces to be displayed and relit consistent with the observer's scene lighting. We investigate end-to-end deep learning architectures that both de-light and relight an image of a human face. Our model decomposes the input image into intrinsic components according to a diffuse physics-based image formation model. We enable non-diffuse effects including cast shadows and specular highlights by predicting a residual correction to the diffuse render. To train and evaluate our model, we collected a portrait database of 21 subjects with various expressions and poses. Each sample is captured in a controlled light stage setup with 32 individual light sources. Our method creates precise and believable relighting results and generalizes to complex illumination conditions and challenging poses, including when the subject is not looking straight at the camera. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Nestmeyer_Learning_Physics-Guided_Face_Relighting_Under_Directional_Light_CVPR_2020_paper.pdf | http://arxiv.org/abs/1906.03355 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Nestmeyer_Learning_Physics-Guided_Face_Relighting_Under_Directional_Light_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Nestmeyer_Learning_Physics-Guided_Face_Relighting_Under_Directional_Light_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
Unsupervised Representation Learning for Gaze Estimation | Yu Yu, Jean-Marc Odobez | Although automatic gaze estimation is very important to a large variety of application areas, it is difficult to train accurate and robust gaze models, in great part due to the difficulty in collecting large and diverse data (annotating 3D gaze is expensive and existing datasets use different setups). To address this issue, our main contribution in this paper is to propose an effective approach to learn a low dimensional gaze representation without gaze annotations, which, to the best of our knowledge, is the first work to do so. The main idea is to rely on a gaze redirection network and use the gaze representation difference of the input and target images (of the redirection network) as the redirection variable. A redirection loss in image domain allows the joint training of both the redirection network and the gaze representation network. In addition, we propose a warping field regularization which not only provides an explicit physical meaning to the gaze representations but also avoids redirection distortions. Promising results on few-shot gaze estimation (competitive results can be achieved with as few as <= 100 calibration samples), cross-dataset gaze estimation, gaze network pretraining, and another task (head pose estimation) demonstrate the validity of our framework. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Yu_Unsupervised_Representation_Learning_for_Gaze_Estimation_CVPR_2020_paper.pdf | http://arxiv.org/abs/1911.06939 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Yu_Unsupervised_Representation_Learning_for_Gaze_Estimation_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Yu_Unsupervised_Representation_Learning_for_Gaze_Estimation_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Yu_Unsupervised_Representation_Learning_CVPR_2020_supplemental.pdf | null | null |
Learning Better Lossless Compression Using Lossy Compression | Fabian Mentzer, Luc Van Gool, Michael Tschannen | We leverage the powerful lossy image compression algorithm BPG to build a lossless image compression system. Specifically, the original image is first decomposed into the lossy reconstruction obtained after compressing it with BPG and the corresponding residual. We then model the distribution of the residual with a convolutional neural network-based probabilistic model that is conditioned on the BPG reconstruction, and combine it with entropy coding to losslessly encode the residual. Finally, the image is stored using the concatenation of the bitstreams produced by BPG and the learned residual coder. The resulting compression system achieves state-of-the-art performance in learned lossless full-resolution image compression, outperforming previous learned approaches as well as PNG, WebP, and JPEG2000. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Mentzer_Learning_Better_Lossless_Compression_Using_Lossy_Compression_CVPR_2020_paper.pdf | http://arxiv.org/abs/2003.10184 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Mentzer_Learning_Better_Lossless_Compression_Using_Lossy_Compression_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Mentzer_Learning_Better_Lossless_Compression_Using_Lossy_Compression_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Mentzer_Learning_Better_Lossless_CVPR_2020_supplemental.pdf | null | null |
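The decomposition itself is simple to prototype. In the sketch below, JPEG stands in for BPG (which has no standard Python binding), and we only verify the exact round trip x = recon + residual; the paper's actual gains come from entropy-coding the residual with a CNN probability model conditioned on the reconstruction, which this sketch omits.

```python
import io
import numpy as np
from PIL import Image

def decompose(img_u8, quality=50):
    """Split an image into a lossy reconstruction and an exact residual."""
    buf = io.BytesIO()
    Image.fromarray(img_u8).save(buf, format="JPEG", quality=quality)
    recon = np.asarray(Image.open(io.BytesIO(buf.getvalue()))).astype(np.int16)
    residual = img_u8.astype(np.int16) - recon      # values in [-255, 255]
    return buf.getvalue(), recon, residual

img = (np.random.default_rng(0).random((64, 64, 3)) * 255).astype(np.uint8)
bitstream, recon, residual = decompose(img)
# Lossless by construction: the original is exactly recoverable.
assert np.array_equal(recon + residual, img.astype(np.int16))
```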
Dynamic Hierarchical Mimicking Towards Consistent Optimization Objectives | Duo Li, Qifeng Chen | While the depth of modern Convolutional Neural Networks (CNNs) surpasses that of the pioneering networks by a significant margin, the traditional way of appending supervision only over the final classifier and progressively propagating gradient flow upstream remains the training mainstay. Seminal Deeply-Supervised Networks (DSN) were proposed to alleviate the difficulty of optimization arising from gradient flow through a long chain. However, it is still vulnerable to issues including interference with the hierarchical representation generation process and inconsistent optimization objectives, as illustrated theoretically and empirically in this paper. Complementary to previous training strategies, we propose Dynamic Hierarchical Mimicking, a generic feature learning mechanism, to advance CNN training with enhanced generalization ability. Partially inspired by DSN, we fork delicately designed side branches from the intermediate layers of a given neural network. Each branch can emerge from certain locations of the main branch dynamically, which not only retains representation rooted in the backbone network but also generates more diverse representations along its own pathway. We go one step further to promote multi-level interactions among different branches through an optimization formula with probabilistic prediction matching losses, thus guaranteeing a more robust optimization process and better representation ability. Experiments on both category and instance recognition tasks demonstrate the substantial improvements of our proposed method over its corresponding counterparts using diverse state-of-the-art CNN architectures. Code and models are publicly available at https://github.com/d-li14/DHM. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Li_Dynamic_Hierarchical_Mimicking_Towards_Consistent_Optimization_Objectives_CVPR_2020_paper.pdf | http://arxiv.org/abs/2003.10739 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Li_Dynamic_Hierarchical_Mimicking_Towards_Consistent_Optimization_Objectives_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Li_Dynamic_Hierarchical_Mimicking_Towards_Consistent_Optimization_Objectives_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Li_Dynamic_Hierarchical_Mimicking_CVPR_2020_supplemental.pdf | null | null |
UCTGAN: Diverse Image Inpainting Based on Unsupervised Cross-Space Translation | Lei Zhao, Qihang Mo, Sihuan Lin, Zhizhong Wang, Zhiwen Zuo, Haibo Chen, Wei Xing, Dongming Lu | Although existing image inpainting approaches have been able to produce visually realistic and semantically correct results, they produce only one result for each masked input. In order to produce multiple and diverse reasonable solutions, we present Unsupervised Cross-space Translation Generative Adversarial Network (called UCTGAN) which mainly consists of three network modules: conditional encoder module, manifold projection module and generation module. The manifold projection module and the generation module are combined to learn one-to-one image mapping between two spaces in an unsupervised way by projecting instance image space and conditional completion image space into common low-dimensional manifold space, which can greatly improve the diversity of the repaired samples. To capture global information, we also introduce a new cross semantic attention layer that exploits the long-range dependencies between the known parts and the completed parts, which can improve realism and appearance consistency of the repaired samples. Extensive experiments on various datasets such as CelebA-HQ, Places2, Paris Street View and ImageNet clearly demonstrate that our method not only generates diverse inpainting solutions from the same image to be repaired, but also has high image quality. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Zhao_UCTGAN_Diverse_Image_Inpainting_Based_on_Unsupervised_Cross-Space_Translation_CVPR_2020_paper.pdf | null | https://www.youtube.com/watch?v=vFJzQQDmIDM | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Zhao_UCTGAN_Diverse_Image_Inpainting_Based_on_Unsupervised_Cross-Space_Translation_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Zhao_UCTGAN_Diverse_Image_Inpainting_Based_on_Unsupervised_Cross-Space_Translation_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
Reciprocal Learning Networks for Human Trajectory Prediction | Hao Sun, Zhiqun Zhao, Zhihai He | We observe that the human trajectory is not only forward predictable, but also backward predictable. Both forward and backward trajectories follow the same social norms and obey the same physical constraints with the only difference in their time directions. Based on this unique property, we develop a new approach, called reciprocal learning, for human trajectory prediction. Two networks, forward and backward prediction networks, are tightly coupled, satisfying the reciprocal constraint, which allows them to be jointly learned. Based on this constraint, we borrow the concept of adversarial attacks of deep neural networks, which iteratively modifies the input of the network to match the given or forced network output, and develop a new method for network prediction, called reciprocal attack for matched prediction. It further improves the prediction accuracy. Our experimental results on benchmark datasets demonstrate that our new method outperforms the state-of-the-art methods for human trajectory prediction. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Sun_Reciprocal_Learning_Networks_for_Human_Trajectory_Prediction_CVPR_2020_paper.pdf | http://arxiv.org/abs/2004.04340 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Sun_Reciprocal_Learning_Networks_for_Human_Trajectory_Prediction_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Sun_Reciprocal_Learning_Networks_for_Human_Trajectory_Prediction_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
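One way to read the reciprocal constraint: a forward network maps past to future, a backward network maps the time-reversed future to the time-reversed past, and chaining them should return the original observation. A schematic PyTorch sketch with toy MLPs follows; the architecture and loss weighting are our simplifications, not the paper's models.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TrajNet(nn.Module):
    """Maps an observed trajectory (B, t_in, 2) to a predicted (B, t_out, 2)."""
    def __init__(self, t_in, t_out):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(),
                                 nn.Linear(t_in * 2, 64), nn.ReLU(),
                                 nn.Linear(64, t_out * 2))
        self.t_out = t_out
    def forward(self, x):
        return self.net(x).view(x.size(0), self.t_out, 2)

fwd, bwd = TrajNet(8, 12), TrajNet(12, 8)   # forward / backward predictors
past, future = torch.randn(4, 8, 2), torch.randn(4, 12, 2)
rev = lambda t: torch.flip(t, dims=[1])     # reverse the time axis

pred_f = fwd(past)                          # predict future from past
pred_b = bwd(rev(future))                   # predict (reversed) past from future
loss = (F.mse_loss(pred_f, future)
        + F.mse_loss(pred_b, rev(past))
        + F.mse_loss(bwd(rev(pred_f)), rev(past)))   # reciprocal consistency
loss.backward()                             # both networks learn jointly
```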
Towards Universal Representation Learning for Deep Face Recognition | Yichun Shi, Xiang Yu, Kihyuk Sohn, Manmohan Chandraker, Anil K. Jain | Recognizing wild faces is extremely hard as they appear with all kinds of variations. Traditional methods either train with specifically annotated variation data from target domains, or introduce unlabeled target variation data to adapt from the training data. Instead, we propose a universal representation learning framework that can deal with large variations unseen in the given training data without leveraging target domain knowledge. We first synthesize training data alongside some semantically meaningful variations, such as low resolution, occlusion and head pose. However, directly training on the augmented data does not converge well, as the newly introduced samples are mostly hard examples. We propose to split the feature embedding into multiple sub-embeddings, and associate different confidence values with each sub-embedding to smooth the training procedure. The sub-embeddings are further decorrelated by regularizing variation classification loss and variation adversarial loss on different partitions of them. Experiments show that our method achieves top performance on general face recognition datasets such as LFW and MegaFace, while performing significantly better on extreme benchmarks such as TinyFace and IJB-S. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Shi_Towards_Universal_Representation_Learning_for_Deep_Face_Recognition_CVPR_2020_paper.pdf | http://arxiv.org/abs/2002.11841 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Shi_Towards_Universal_Representation_Learning_for_Deep_Face_Recognition_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Shi_Towards_Universal_Representation_Learning_for_Deep_Face_Recognition_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Shi_Towards_Universal_Representation_CVPR_2020_supplemental.pdf | null | null |
Minimal Solutions to Relative Pose Estimation From Two Views Sharing a Common Direction With Unknown Focal Length | Yaqing Ding, Jian Yang, Jean Ponce, Hui Kong | We propose minimal solutions to the relative pose estimation problem from two views sharing a common direction with unknown focal length. This is relevant for cameras equipped with an IMU (inertial measurement unit), e.g., smart phones, tablets. Similar to the 6-point algorithm for two cameras with unknown but equal focal lengths and the 7-point algorithm for two cameras with different and unknown focal lengths, we derive new 4- and 5-point algorithms for these two cases, respectively. The proposed algorithms can cope with coplanar points, which is a degenerate configuration for their 6- and 7-point counterparts. We present a detailed analysis and comparisons with the state of the art. Experimental results on both synthetic data and real images from a smart phone demonstrate the usefulness of the proposed algorithms. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Ding_Minimal_Solutions_to_Relative_Pose_Estimation_From_Two_Views_Sharing_CVPR_2020_paper.pdf | null | https://www.youtube.com/watch?v=JwH1kX3TiCg | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Ding_Minimal_Solutions_to_Relative_Pose_Estimation_From_Two_Views_Sharing_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Ding_Minimal_Solutions_to_Relative_Pose_Estimation_From_Two_Views_Sharing_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Ding_Minimal_Solutions_to_CVPR_2020_supplemental.pdf | null | null |
Deep Fair Clustering for Visual Learning | Peizhao Li, Han Zhao, Hongfu Liu | Fair clustering aims to hide sensitive attributes during data partitioning by balancing the distribution of protected subgroups in each cluster. Existing work attempts to address this problem by reducing it to a classical balanced clustering with a constraint on the proportion of protected subgroups of the input space. However, the input space may limit the clustering performance, and so far only low-dimensional datasets have been considered. In light of these limitations, in this paper, we propose Deep Fair Clustering (DFC) to learn fair and clustering-favorable representations for clustering simultaneously. Our approach can effectively filter out sensitive attributes from representations, and also leads to representations that are amenable to the subsequent cluster analysis. Theoretically, we show that our fairness constraint in DFC will not incur much loss in terms of several clustering metrics. Empirically, we provide extensive experimental demonstrations on four visual datasets to corroborate the superior performance of the proposed approach over existing fair clustering and deep clustering methods on both cluster validity and fairness criteria. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Li_Deep_Fair_Clustering_for_Visual_Learning_CVPR_2020_paper.pdf | null | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Li_Deep_Fair_Clustering_for_Visual_Learning_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Li_Deep_Fair_Clustering_for_Visual_Learning_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Li_Deep_Fair_Clustering_CVPR_2020_supplemental.pdf | null | null |
Rotation Consistent Margin Loss for Efficient Low-Bit Face Recognition | Yudong Wu, Yichao Wu, Ruihao Gong, Yuanhao Lv, Ken Chen, Ding Liang, Xiaolin Hu, Xianglong Liu, Junjie Yan | In this paper, we consider the low-bit quantization problem of face recognition (FR) under the open-set protocol. Different from well explored low-bit quantization on the closed-set image classification task, the open-set task is more sensitive to quantization errors (QEs). We redefine the QEs in angular space and disentangle them into class error and individual error. These two parts correspond to inter-class separability and intra-class compactness, respectively. Instead of eliminating the QEs entirely, we propose the rotation consistent margin (RCM) loss to minimize the individual error, which is more essential to feature discriminative power. Extensive experiments on popular benchmark datasets such as MegaFace Challenge, YouTube Faces (YTF), Labeled Faces in the Wild (LFW) and IJB-C show the superiority of the proposed loss in low-bit FR quantization tasks. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Wu_Rotation_Consistent_Margin_Loss_for_Efficient_Low-Bit_Face_Recognition_CVPR_2020_paper.pdf | null | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Wu_Rotation_Consistent_Margin_Loss_for_Efficient_Low-Bit_Face_Recognition_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Wu_Rotation_Consistent_Margin_Loss_for_Efficient_Low-Bit_Face_Recognition_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
Super-BPD: Super Boundary-to-Pixel Direction for Fast Image Segmentation | Jianqiang Wan, Yang Liu, Donglai Wei, Xiang Bai, Yongchao Xu | Image segmentation is a fundamental vision task and still remains a crucial step for many applications. In this paper, we propose a fast image segmentation method based on a novel super boundary-to-pixel direction (super-BPD) and a customized segmentation algorithm with super-BPD. Precisely, we define BPD on each pixel as a two-dimensional unit vector pointing from its nearest boundary to the pixel. In the BPD, nearby pixels from different regions have opposite directions departing from each other, and nearby pixels in the same region have directions pointing toward one another (i.e., around medial points). We make use of this property to partition the image into super-BPDs, which are novel informative superpixels with robust direction similarity for fast grouping into segmentation regions. Extensive experimental results on BSDS500 and Pascal Context demonstrate the accuracy and efficiency of the proposed super-BPD in segmenting images. Specifically, we achieve comparable or superior performance with MCG while running at 25 fps vs. 0.07 fps. Super-BPD also exhibits a noteworthy transferability to unseen scenes. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Wan_Super-BPD_Super_Boundary-to-Pixel_Direction_for_Fast_Image_Segmentation_CVPR_2020_paper.pdf | null | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Wan_Super-BPD_Super_Boundary-to-Pixel_Direction_for_Fast_Image_Segmentation_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Wan_Super-BPD_Super_Boundary-to-Pixel_Direction_for_Fast_Image_Segmentation_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
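Given a binary boundary map, the BPD field as defined above is a few lines of code: a Euclidean distance transform returns, for every pixel, the coordinates of its nearest boundary pixel, from which the unit direction follows. A sketch using SciPy (the paper predicts BPDs with a network and the super-BPD grouping stage is not shown):

```python
import numpy as np
from scipy import ndimage

def boundary_to_pixel_direction(boundary):
    """BPD field: at each pixel, the unit vector pointing from its nearest
    boundary pixel to the pixel itself. `boundary` is a boolean (H, W) mask."""
    # Nearest-boundary-pixel indices for every location (boundary pixels
    # are the zeros of ~boundary, so distances are measured to them).
    _, (iy, ix) = ndimage.distance_transform_edt(~boundary, return_indices=True)
    yy, xx = np.mgrid[0:boundary.shape[0], 0:boundary.shape[1]]
    d = np.stack([yy - iy, xx - ix]).astype(float)       # (2, H, W) offsets
    norm = np.linalg.norm(d, axis=0)
    return np.where(norm > 0, d / np.maximum(norm, 1e-9), 0.0)
```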
TransMoMo: Invariance-Driven Unsupervised Video Motion Retargeting | Zhuoqian Yang, Wentao Zhu, Wayne Wu, Chen Qian, Qiang Zhou, Bolei Zhou, Chen Change Loy | We present a lightweight video motion retargeting approach, TransMoMo, that is capable of transferring motion of a person in a source video realistically to another video of a target person. Without using any paired data for supervision, the proposed method can be trained in an unsupervised manner by exploiting invariance properties of three orthogonal factors of variation including motion, structure, and view-angle. Specifically, with loss functions carefully derived based on invariance, we train an auto-encoder to disentangle the latent representations of such factors given the source and target video clips. This allows us to selectively transfer motion extracted from the source video seamlessly to the target video in spite of structural and view-angle disparities between the source and the target. The relaxed assumption of paired data allows our method to be trained on a vast amount of videos without manual annotation of source-target pairing, leading to improved robustness against large structural variations and extreme motion in videos. We demonstrate the effectiveness of our method over the state-of-the-art methods. Code, model and data are publicly available on our project page (https://yzhq97.github.io/transmomo). | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Yang_TransMoMo_Invariance-Driven_Unsupervised_Video_Motion_Retargeting_CVPR_2020_paper.pdf | http://arxiv.org/abs/2003.14401 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Yang_TransMoMo_Invariance-Driven_Unsupervised_Video_Motion_Retargeting_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Yang_TransMoMo_Invariance-Driven_Unsupervised_Video_Motion_Retargeting_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Yang_TransMoMo_Invariance-Driven_Unsupervised_CVPR_2020_supplemental.pdf | null | null |
D3Feat: Joint Learning of Dense Detection and Description of 3D Local Features | Xuyang Bai, Zixin Luo, Lei Zhou, Hongbo Fu, Long Quan, Chiew-Lan Tai | Successful point cloud registration often relies on the robust establishment of sparse matches through discriminative 3D local features. Despite the fast evolution of learning-based 3D feature descriptors, little attention has been drawn to the learning of 3D feature detectors, even less to the joint learning of the two tasks. In this paper, we leverage a 3D fully convolutional network for 3D point clouds, and propose a novel and practical learning mechanism that densely predicts both a detection score and a description feature for each 3D point. In particular, we propose a keypoint selection strategy that overcomes the inherent density variations of 3D point clouds, and further propose a self-supervised detector loss guided by the on-the-fly feature matching results during training. Finally, our method achieves state-of-the-art results in both indoor and outdoor scenarios, evaluated on 3DMatch and KITTI datasets, and shows its strong generalization ability on the ETH dataset. Towards practical use, we show that by adopting a reliable feature detector, sampling a smaller number of features is sufficient to achieve accurate and fast point cloud alignment. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Bai_D3Feat_Joint_Learning_of_Dense_Detection_and_Description_of_3D_CVPR_2020_paper.pdf | http://arxiv.org/abs/2003.03164 | https://www.youtube.com/watch?v=Q_jQVJ3ANFI | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Bai_D3Feat_Joint_Learning_of_Dense_Detection_and_Description_of_3D_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Bai_D3Feat_Joint_Learning_of_Dense_Detection_and_Description_of_3D_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Bai_D3Feat_Joint_Learning_CVPR_2020_supplemental.pdf | null | null |
Cross-Batch Memory for Embedding Learning | Xun Wang, Haozhi Zhang, Weilin Huang, Matthew R. Scott | Mining informative negative instances is of central importance to deep metric learning (DML). However, the hard-mining ability of existing DML methods is intrinsically limited by mini-batch training, where only a mini-batch of instances are accessible at each iteration. In this paper, we identify a "slow drift" phenomenon by observing that the embedding features drift exceptionally slowly even as the model parameters are updating throughout the training process. This suggests that the features of instances computed at preceding iterations closely approximate their features extracted by the current model. We propose a cross-batch memory (XBM) mechanism that memorizes the embeddings of past iterations, allowing the model to collect sufficient hard negative pairs across multiple mini-batches - even over the whole dataset. Our XBM can be directly integrated into a general pair-based DML framework. We demonstrate that, without bells and whistles, XBM-augmented DML can boost the performance considerably on image retrieval. In particular, with XBM, a simple contrastive loss can have large R@1 improvements of 12%-22.5% on three large-scale datasets, easily surpassing the most sophisticated state-of-the-art methods [38, 27, 2], by a large margin. Our XBM is conceptually simple, easy to implement - a few lines of code - and memory efficient, with a negligible 0.2 GB of extra GPU memory. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Wang_Cross-Batch_Memory_for_Embedding_Learning_CVPR_2020_paper.pdf | http://arxiv.org/abs/1912.06798 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Wang_Cross-Batch_Memory_for_Embedding_Learning_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Wang_Cross-Batch_Memory_for_Embedding_Learning_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Wang_Cross-Batch_Memory_for_CVPR_2020_supplemental.pdf | null | null |
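The XBM mechanism really is a few lines: a fixed-size FIFO of past embeddings and labels, queried for extra pairs at every training step. A minimal PyTorch sketch consistent with the description above (class and method names are ours):

```python
import torch

class XBM:
    """Cross-batch memory: a FIFO queue of past embeddings and labels
    that augments each mini-batch with extra positive/negative pairs."""
    def __init__(self, size, dim):
        self.feats = torch.zeros(size, dim)
        self.labels = torch.zeros(size, dtype=torch.long)
        self.ptr, self.full, self.size = 0, False, size

    @torch.no_grad()
    def enqueue(self, feats, labels):
        n = feats.size(0)
        idx = (self.ptr + torch.arange(n)) % self.size
        self.feats[idx] = feats.detach()       # memory holds no gradients
        self.labels[idx] = labels
        self.full = self.full or self.ptr + n >= self.size
        self.ptr = (self.ptr + n) % self.size

    def get(self):
        if self.full:
            return self.feats, self.labels
        return self.feats[:self.ptr], self.labels[:self.ptr]

# Usage inside a training step (embeddings assumed L2-normalized):
#   xbm.enqueue(emb, labels)
#   mem_feats, mem_labels = xbm.get()
#   sim = emb @ mem_feats.t()   # pairs for any pair-based / contrastive loss
```

The "slow drift" observation is what justifies treating these stale embeddings as valid approximations of the current model's features.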
Hierarchical Pyramid Diverse Attention Networks for Face Recognition | Qiangchang Wang, Tianyi Wu, He Zheng, Guodong Guo | Deep learning has achieved great success in face recognition (FR); however, few existing models take hierarchical multi-scale local features into consideration. In this work, we propose a hierarchical pyramid diverse attention (HPDA) network. First, it is observed that local patches play important roles in FR when the global face appearance changes dramatically. Some recent works apply attention modules to locate local patches automatically without relying on face landmarks. Unfortunately, without considering diversity, some learned attentions tend to have redundant responses around similar local patches, while neglecting other potentially discriminative facial parts. Meanwhile, local patches may appear at different scales due to pose variations or large expression changes. To alleviate these challenges, we propose a pyramid diverse attention (PDA) to learn multi-scale diverse local representations automatically and adaptively. More specifically, a pyramid attention is developed to capture multi-scale features. Meanwhile, a diversity learning scheme is developed to encourage models to focus on different local patches and generate diverse local features. Second, almost all existing models focus on extracting features from the last convolutional layer, lacking the local details and small-scale face parts captured in lower layers. Instead of simple concatenation or addition, we propose to use a hierarchical bilinear pooling (HBP) to fuse information from multiple layers effectively. Thus, the HPDA is developed by integrating the PDA into the HBP. Experimental results on several datasets show the effectiveness of the HPDA, compared to the state-of-the-art methods. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Wang_Hierarchical_Pyramid_Diverse_Attention_Networks_for_Face_Recognition_CVPR_2020_paper.pdf | null | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Wang_Hierarchical_Pyramid_Diverse_Attention_Networks_for_Face_Recognition_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Wang_Hierarchical_Pyramid_Diverse_Attention_Networks_for_Face_Recognition_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
ARShadowGAN: Shadow Generative Adversarial Network for Augmented Reality in Single Light Scenes | Daquan Liu, Chengjiang Long, Hongpan Zhang, Hanning Yu, Xinzhi Dong, Chunxia Xiao | Generating virtual object shadows consistent with the real-world environment shading effects is important but challenging in computer vision and augmented reality applications. To address this problem, we propose an end-to-end Generative Adversarial Network for shadow generation named ARShadowGAN for augmented reality in single light scenes. Our ARShadowGAN makes full use of attention mechanism and is able to directly model the mapping relation between the virtual object shadow and the real-world environment without any explicit estimation of the illumination and 3D geometric information. In addition, we collect an image set which provides rich clues for shadow generation and construct a dataset for training and evaluating our proposed ARShadowGAN. The extensive experimental results show that our proposed ARShadowGAN is capable of directly generating plausible virtual object shadows in single light scenes. Our source code is available at https://github.com/ldq9526/ARShadowGAN. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Liu_ARShadowGAN_Shadow_Generative_Adversarial_Network_for_Augmented_Reality_in_Single_CVPR_2020_paper.pdf | null | https://www.youtube.com/watch?v=kGRFptAdnJM | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Liu_ARShadowGAN_Shadow_Generative_Adversarial_Network_for_Augmented_Reality_in_Single_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Liu_ARShadowGAN_Shadow_Generative_Adversarial_Network_for_Augmented_Reality_in_Single_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Liu_ARShadowGAN_Shadow_Generative_CVPR_2020_supplemental.pdf | null | null |
Going Deeper With Lean Point Networks | Eric-Tuan Le, Iasonas Kokkinos, Niloy J. Mitra | In this work we introduce Lean Point Networks (LPNs) to train deeper and more accurate point processing networks by relying on three novel point processing blocks that improve memory consumption, inference time, and accuracy: a convolution-type block for point sets that blends neighborhood information in a memory-efficient manner; a crosslink block that efficiently shares information across low- and high-resolution processing branches; and a multi-resolution point cloud processing block for faster diffusion of information. By combining these blocks, we design wider and deeper point-based architectures. We report systematic accuracy and memory consumption improvements on multiple publicly available segmentation tasks by using our generic modules as drop-in replacements for the blocks of multiple architectures (PointNet++, DGCNN, SpiderNet, PointCNN). | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Le_Going_Deeper_With_Lean_Point_Networks_CVPR_2020_paper.pdf | http://arxiv.org/abs/1907.00960 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Le_Going_Deeper_With_Lean_Point_Networks_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Le_Going_Deeper_With_Lean_Point_Networks_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Le_Going_Deeper_With_CVPR_2020_supplemental.pdf | null | null |
Semantic Image Manipulation Using Scene Graphs | Helisa Dhamo, Azade Farshad, Iro Laina, Nassir Navab, Gregory D. Hager, Federico Tombari, Christian Rupprecht | Image manipulation can be considered a special case of image generation where the image to be produced is a modification of an existing image. Image generation and manipulation have been, for the most part, tasks that operate on raw pixels. However, the remarkable progress in learning rich image and object representations has opened the way for tasks such as text-to-image or layout-to-image generation that are mainly driven by semantics. In our work, we address the novel problem of image manipulation from scene graphs, in which a user can edit images by merely applying changes in the nodes or edges of a semantic graph that is generated from the image. Our goal is to encode image information in a given constellation and from there on generate new constellations, such as replacing objects or even changing relationships between objects, while respecting the semantics and style from the original image. We introduce a spatio-semantic scene graph network that does not require direct supervision for constellation changes or image edits. This makes it possible to train the system from existing real-world datasets with no additional annotation effort. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Dhamo_Semantic_Image_Manipulation_Using_Scene_Graphs_CVPR_2020_paper.pdf | http://arxiv.org/abs/2004.03677 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Dhamo_Semantic_Image_Manipulation_Using_Scene_Graphs_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Dhamo_Semantic_Image_Manipulation_Using_Scene_Graphs_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Dhamo_Semantic_Image_Manipulation_CVPR_2020_supplemental.pdf | null | null |
Neural Voxel Renderer: Learning an Accurate and Controllable Rendering Tool | Konstantinos Rematas, Vittorio Ferrari | We present a neural rendering framework that maps a voxelized scene into a high quality image. Highly-textured objects and scene element interactions are realistically rendered by our method, despite having a rough representation as an input. Moreover, our approach allows controllable rendering: geometric and appearance modifications in the input are accurately propagated to the output. The user can move, rotate and scale an object, change its appearance and texture or modify the position of the light and all these edits are represented in the final rendering. We demonstrate the effectiveness of our approach by rendering scenes with varying appearance, from single color per object to complex, high-frequency textures. We show that our rerendering network can generate very detailed images that represent precisely the appearance of the input scene. Our experiments illustrate that our approach achieves more accurate image synthesis results compared to alternatives and can also handle low voxel grid resolutions. Finally, we show how our neural rendering framework can capture and faithfully render objects from real images and from a diverse set of classes. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Rematas_Neural_Voxel_Renderer_Learning_an_Accurate_and_Controllable_Rendering_Tool_CVPR_2020_paper.pdf | http://arxiv.org/abs/1912.04591 | https://www.youtube.com/watch?v=CeGAtD3qqdM | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Rematas_Neural_Voxel_Renderer_Learning_an_Accurate_and_Controllable_Rendering_Tool_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Rematas_Neural_Voxel_Renderer_Learning_an_Accurate_and_Controllable_Rendering_Tool_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
How to Train Your Deep Multi-Object Tracker | Yihong Xu, Aljosa Osep, Yutong Ban, Radu Horaud, Laura Leal-Taixe, Xavier Alameda-Pineda | The recent trend in vision-based multi-object tracking (MOT) is heading towards leveraging the representational power of deep learning to jointly learn to detect and track objects. However, existing methods train only certain sub-modules using loss functions that often do not correlate with established tracking evaluation measures such as Multi-Object Tracking Accuracy (MOTA) and Precision (MOTP). As these measures are not differentiable, the choice of appropriate loss functions for end-to-end training of multi-object tracking methods is still an open research problem. In this paper, we bridge this gap by proposing a differentiable proxy of MOTA and MOTP, which we combine in a loss function suitable for end-to-end training of deep multi-object trackers. As a key ingredient, we propose a Deep Hungarian Net (DHN) module that approximates the Hungarian matching algorithm. DHN allows estimating the correspondence between object tracks and ground truth objects to compute differentiable proxies of MOTA and MOTP, which are in turn used to optimize deep trackers directly. We experimentally demonstrate that the proposed differentiable framework improves the performance of existing multi-object trackers, and we establish a new state of the art on the MOTChallenge benchmark. Our code is publicly available from https://github.com/yihongXU/deepMOT. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Xu_How_to_Train_Your_Deep_Multi-Object_Tracker_CVPR_2020_paper.pdf | http://arxiv.org/abs/1906.06618 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Xu_How_to_Train_Your_Deep_Multi-Object_Tracker_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Xu_How_to_Train_Your_Deep_Multi-Object_Tracker_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Xu_How_to_Train_CVPR_2020_supplemental.pdf | null | null |
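Since the paper's Deep Hungarian Net is itself a learned approximation of bipartite matching, a quick way to see why a soft, differentiable assignment enables end-to-end training is to substitute Sinkhorn normalization for the learned module and backpropagate through a MOTP-like proxy. A hedged stand-in sketch (this is not the paper's DHN, and the proxy below is a simplification of its differentiable MOTA/MOTP):

```python
import torch

def soft_assignment(dist, n_iters=50, tau=0.1):
    """Differentiable soft matching between tracks and ground-truth objects
    via Sinkhorn normalization of a distance matrix."""
    log_p = -dist / tau
    for _ in range(n_iters):             # alternate row/column normalization
        log_p = log_p - torch.logsumexp(log_p, dim=1, keepdim=True)
        log_p = log_p - torch.logsumexp(log_p, dim=0, keepdim=True)
    return log_p.exp()                   # approximately doubly stochastic

dist = torch.rand(5, 5, requires_grad=True)   # track-to-GT distances
A = soft_assignment(dist)
proxy_motp = (A * dist).sum() / A.sum()       # soft mean matched distance
proxy_motp.backward()                         # gradients reach the tracker
```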
Cascaded Deep Monocular 3D Human Pose Estimation With Evolutionary Training Data | Shichao Li, Lei Ke, Kevin Pratama, Yu-Wing Tai, Chi-Keung Tang, Kwang-Ting Cheng | End-to-end deep representation learning has achieved remarkable accuracy for monocular 3D human pose estimation, yet these models may fail for unseen poses with limited and fixed training data. This paper proposes a novel data augmentation method that: (1) is scalable for synthesizing a massive amount of training data (over 8 million valid 3D human poses with corresponding 2D projections) for training 2D-to-3D networks, and (2) can effectively reduce dataset bias. Our method evolves a limited dataset to synthesize unseen 3D human skeletons based on a hierarchical human representation and heuristics inspired by prior knowledge. Extensive experiments show that our approach not only achieves state-of-the-art accuracy on the largest public benchmark, but also generalizes significantly better to unseen and rare poses. Relevant files and tools are available at the project website. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Li_Cascaded_Deep_Monocular_3D_Human_Pose_Estimation_With_Evolutionary_Training_CVPR_2020_paper.pdf | http://arxiv.org/abs/2006.07778 | https://www.youtube.com/watch?v=erYymlWw2bo | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Li_Cascaded_Deep_Monocular_3D_Human_Pose_Estimation_With_Evolutionary_Training_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Li_Cascaded_Deep_Monocular_3D_Human_Pose_Estimation_With_Evolutionary_Training_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Li_Cascaded_Deep_Monocular_CVPR_2020_supplemental.zip | null | null |
An End-to-End Edge Aggregation Network for Moving Object Segmentation | Prashant W. Patil, Kuldeep M. Biradar, Akshay Dudhane, Subrahmanyam Murala | Moving object segmentation in videos (MOS) is a highly demanding task for security-based applications like automated outdoor video surveillance. Most existing techniques for MOS depend heavily on fine-tuning a model on the first frame(s) of the test sequence or on complicated training procedures, which limits the practical serviceability of the algorithm. In this paper, an edge extraction mechanism (EEM) based on inherent correlation learning and a dense residual block (DRB) are proposed for discriminative foreground representation. The multi-scale EEM module provides foreground edge-related information (with the help of the encoder) to the decoder through skip connections at each subsequent scale. Further, the responses of the optical flow encoder stream and the last EEM module are embedded in the bridge network. The bridge network comprises multi-scale residual blocks with dense connections to learn effective and efficient foreground-relevant features. Finally, to generate accurate and consistent foreground object maps, a decoder block is proposed with skip connections from the respective multi-scale EEM feature maps and the down-sampled response of the previous frame's output. Notably, the proposed network requires neither pre-trained models nor fine-tuning of the parameters on the initial frame(s) of the test video. The performance of the proposed network is evaluated with different configurations, such as disjoint, cross-data, and global training-testing techniques. An ablation study is conducted to analyze each module of the proposed network. To demonstrate the effectiveness of the proposed framework, a comprehensive analysis on four benchmark video datasets is conducted. Experimental results show that the proposed approach outperforms the state-of-the-art methods for MOS. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Patil_An_End-to-End_Edge_Aggregation_Network_for_Moving_Object_Segmentation_CVPR_2020_paper.pdf | null | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Patil_An_End-to-End_Edge_Aggregation_Network_for_Moving_Object_Segmentation_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Patil_An_End-to-End_Edge_Aggregation_Network_for_Moving_Object_Segmentation_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
Overcoming Multi-Model Forgetting in One-Shot NAS With Diversity Maximization | Miao Zhang, Huiqi Li, Shirui Pan, Xiaojun Chang, Steven Su | One-Shot Neural Architecture Search (NAS) significantly improves computational efficiency through weight sharing. However, this approach also introduces multi-model forgetting during supernet training (the architecture search phase), where the performance of previous architectures degrades when new architectures are sequentially trained with partially-shared weights. To overcome such catastrophic forgetting, the state-of-the-art method assumes that the shared weights are optimal when jointly optimizing a posterior probability. However, this strict assumption does not necessarily hold for One-Shot NAS in practice. In this paper, we formulate supernet training in One-Shot NAS as a constrained continual-learning optimization problem, such that learning the current architecture should not degrade the performance of previous architectures during supernet training. We propose a Novelty Search based Architecture Selection (NSAS) loss function and demonstrate that the posterior probability can be calculated without the strict assumption when maximizing the diversity of the selected constraints. A greedy novelty search method is devised to find the most representative subset to regularize the supernet training. We apply our proposed approach to two One-Shot NAS baselines, random sampling NAS (RandomNAS) and gradient-based sampling NAS (GDAS). Extensive experiments demonstrate that our method enhances the predictive ability of the supernet in One-Shot NAS and achieves remarkable performance on CIFAR-10, CIFAR-100, and PTB with efficiency. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Zhang_Overcoming_Multi-Model_Forgetting_in_One-Shot_NAS_With_Diversity_Maximization_CVPR_2020_paper.pdf | null | https://www.youtube.com/watch?v=b_0W9Ud895M | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Zhang_Overcoming_Multi-Model_Forgetting_in_One-Shot_NAS_With_Diversity_Maximization_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Zhang_Overcoming_Multi-Model_Forgetting_in_One-Shot_NAS_With_Diversity_Maximization_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
Fine-Grained Image-to-Image Transformation Towards Visual Recognition | Wei Xiong, Yutong He, Yixuan Zhang, Wenhan Luo, Lin Ma, Jiebo Luo | Existing image-to-image transformation approaches primarily focus on synthesizing visually pleasing data. Generating images with correct identity labels is challenging yet much less explored. It is even more challenging to deal with image transformation tasks with large deformation in poses, viewpoints, or scales while preserving the identity, such as face rotation and object viewpoint morphing. In this paper, we aim at transforming an image with a fine-grained category to synthesize new images that preserve the identity of the input image, which can thereby benefit the subsequent fine-grained image recognition and few-shot learning tasks. The generated images, transformed with large geometric deformation, do not necessarily need to be of high visual quality but are required to maintain as much identity information as possible. To this end, we adopt a model based on generative adversarial networks to disentangle the identity-related and identity-unrelated factors of an image. In order to preserve the fine-grained contextual details of the input image during the deformable transformation, a constrained nonalignment connection method is proposed to construct learnable highways between intermediate convolution blocks in the generator. Moreover, an adaptive identity modulation mechanism is proposed to transfer the identity information into the output image effectively. Extensive experiments on the CompCars and Multi-PIE datasets demonstrate that our model preserves the identity of the generated images much better than the state-of-the-art image-to-image transformation models, and as a result significantly boosts the visual recognition performance in fine-grained few-shot learning. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Xiong_Fine-Grained_Image-to-Image_Transformation_Towards_Visual_Recognition_CVPR_2020_paper.pdf | http://arxiv.org/abs/2001.03856 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Xiong_Fine-Grained_Image-to-Image_Transformation_Towards_Visual_Recognition_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Xiong_Fine-Grained_Image-to-Image_Transformation_Towards_Visual_Recognition_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Xiong_Fine-Grained_Image-to-Image_Transformation_CVPR_2020_supplemental.pdf | null | null |
Self-Supervised Learning of Pretext-Invariant Representations | Ishan Misra, Laurens van der Maaten | The goal of self-supervised learning from images is to construct image representations that are semantically meaningful via pretext tasks that do not require semantic annotations. Many pretext tasks lead to representations that are covariant with image transformations. We argue that, instead, semantic representations ought to be invariant under such transformations. Specifically, we develop Pretext-Invariant Representation Learning (PIRL, pronounced as `pearl') that learns invariant representations based on pretext tasks. We use PIRL with a commonly used pretext task that involves solving jigsaw puzzles. We find that PIRL substantially improves the semantic quality of the learned image representations. Our approach sets a new state of the art in self-supervised learning from images on several popular benchmarks. Despite being unsupervised, PIRL outperforms supervised pre-training in learning image representations for object detection. Altogether, our results demonstrate the potential of self-supervised representations with good invariance properties. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Misra_Self-Supervised_Learning_of_Pretext-Invariant_Representations_CVPR_2020_paper.pdf | http://arxiv.org/abs/1912.01991 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Misra_Self-Supervised_Learning_of_Pretext-Invariant_Representations_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Misra_Self-Supervised_Learning_of_Pretext-Invariant_Representations_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
HyperSTAR: Task-Aware Hyperparameters for Deep Networks | Gaurav Mittal, Chang Liu, Nikolaos Karianakis, Victor Fragoso, Mei Chen, Yun Fu | While deep neural networks excel in solving visual recognition tasks, they require significant effort to find hyperparameters that make them work optimally. Hyperparameter Optimization (HPO) approaches have automated the process of finding good hyperparameters, but they do not adapt to a given task (they are task-agnostic), making them computationally inefficient. To reduce HPO time, we present HyperSTAR (System for Task Aware Hyperparameter Recommendation), a task-aware method to warm-start HPO for deep neural networks. HyperSTAR ranks and recommends hyperparameters by predicting their performance conditioned on a joint dataset-hyperparameter space. It learns a dataset (task) representation along with the performance predictor directly from raw images in an end-to-end fashion. The recommendations, when integrated with an existing HPO method, make it task-aware and significantly reduce the time to achieve optimal performance. We conduct extensive experiments on 10 publicly available large-scale image classification datasets over two different network architectures, validating that HyperSTAR evaluates 50% fewer configurations to achieve the best performance compared to existing methods. We further demonstrate that HyperSTAR makes Hyperband (HB) task-aware, achieving the optimal accuracy in just 25% of the budget required by both vanilla HB and Bayesian Optimized HB (BOHB). | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Mittal_HyperSTAR_Task-Aware_Hyperparameters_for_Deep_Networks_CVPR_2020_paper.pdf | http://arxiv.org/abs/2005.10524 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Mittal_HyperSTAR_Task-Aware_Hyperparameters_for_Deep_Networks_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Mittal_HyperSTAR_Task-Aware_Hyperparameters_for_Deep_Networks_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Mittal_HyperSTAR_Task-Aware_Hyperparameters_CVPR_2020_supplemental.pdf | null | null |
Deblurring Using Analysis-Synthesis Networks Pair | Adam Kaufman, Raanan Fattal | Blind image deblurring remains a challenging problem for modern artificial neural networks. Unlike other image restoration problems, deblurring networks fall behind the performance of existing deblurring algorithms in the case of uniform and 3D blur models. This follows from the diverse and profound effect that the unknown blur-kernel has on the deblurring operator. We propose a new architecture that breaks the deblurring network into an analysis network, which estimates the blur, and a synthesis network, which uses this kernel to deblur the image. Unlike existing deblurring networks, this design allows us to explicitly incorporate the blur-kernel in the network's training. In addition, we introduce new cross-correlation layers that allow better blur estimation, as well as unique components that allow the estimated blur to control the deblurring action of the synthesis network. Evaluating the new approach over established benchmark datasets shows its ability to achieve state-of-the-art deblurring accuracy on various tests, as well as to offer a major speedup in runtime. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Kaufman_Deblurring_Using_Analysis-Synthesis_Networks_Pair_CVPR_2020_paper.pdf | http://arxiv.org/abs/2004.02956 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Kaufman_Deblurring_Using_Analysis-Synthesis_Networks_Pair_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Kaufman_Deblurring_Using_Analysis-Synthesis_Networks_Pair_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Kaufman_Deblurring_Using_Analysis-Synthesis_CVPR_2020_supplemental.pdf | null | null |
A Novel Recurrent Encoder-Decoder Structure for Large-Scale Multi-View Stereo Reconstruction From an Open Aerial Dataset | Jin Liu, Shunping Ji | A great deal of recent research has demonstrated that multi-view stereo (MVS) matching can be solved with deep learning methods. However, these efforts focused on close-range objects, and only a few of the deep learning-based methods were specifically designed for large-scale 3D urban reconstruction, due to the lack of multi-view aerial image benchmarks. In this paper, we present a synthetic aerial dataset created for MVS tasks, called the WHU dataset, which, to our knowledge, is the first large-scale multi-view aerial dataset. It was generated from a highly accurate 3D digital surface model produced from thousands of real aerial images with precise camera parameters. We also introduce a novel network, called RED-Net, for wide-range depth inference, which we developed from a recurrent encoder-decoder structure to regularize cost maps across depths and a 2D fully convolutional network as the framework. RED-Net's low memory requirements and high performance make it suitable for large-scale and highly accurate 3D Earth surface reconstruction. Our experiments confirmed that our method not only exceeded the current state-of-the-art MVS methods by more than 50% in mean absolute error (MAE) with lower memory and computational cost, but was also more efficient: it outperformed one of the best commercial software programs based on conventional methods, improving efficiency 16-fold. Moreover, we demonstrated that our RED-Net model pre-trained on the synthetic WHU dataset can be efficiently transferred to very different multi-view aerial image datasets without any fine-tuning. Dataset and code are available at http://gpcv.whu.edu.cn/data. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Liu_A_Novel_Recurrent_Encoder-Decoder_Structure_for_Large-Scale_Multi-View_Stereo_Reconstruction_CVPR_2020_paper.pdf | http://arxiv.org/abs/2003.00637 | https://www.youtube.com/watch?v=gZds3nKoPR8 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Liu_A_Novel_Recurrent_Encoder-Decoder_Structure_for_Large-Scale_Multi-View_Stereo_Reconstruction_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Liu_A_Novel_Recurrent_Encoder-Decoder_Structure_for_Large-Scale_Multi-View_Stereo_Reconstruction_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
Deep Polarization Cues for Transparent Object Segmentation | Agastya Kalra, Vage Taamazyan, Supreeth Krishna Rao, Kartik Venkataraman, Ramesh Raskar, Achuta Kadambi | Segmentation of transparent objects is a hard, open problem in computer vision. Transparent objects lack texture of their own, adopting instead the texture of the scene background. This paper reframes the problem of transparent object segmentation into the realm of light polarization, i.e., the rotation of light waves. We use a polarization camera to capture multi-modal imagery and couple this with a unique deep learning backbone for processing polarization input data. Our method achieves instance segmentation on cluttered, transparent objects in various scene and background conditions, demonstrating an improvement over traditional image-based approaches. As an application, we use this for robotic bin picking of transparent objects. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Kalra_Deep_Polarization_Cues_for_Transparent_Object_Segmentation_CVPR_2020_paper.pdf | null | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Kalra_Deep_Polarization_Cues_for_Transparent_Object_Segmentation_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Kalra_Deep_Polarization_Cues_for_Transparent_Object_Segmentation_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Kalra_Deep_Polarization_Cues_CVPR_2020_supplemental.pdf | null | null |
GAN Compression: Efficient Architectures for Interactive Conditional GANs | Muyang Li, Ji Lin, Yaoyao Ding, Zhijian Liu, Jun-Yan Zhu, Song Han | Conditional Generative Adversarial Networks (cGANs) have enabled controllable image synthesis for many computer vision and graphics applications. However, recent cGANs are 1-2 orders of magnitude more computationally intensive than modern recognition CNNs. For example, GauGAN consumes 281G MACs per image, compared to 0.44G MACs for MobileNet-v3, making interactive deployment difficult. In this work, we propose a general-purpose compression framework for reducing the inference time and model size of the generator in cGANs. Directly applying existing CNN compression methods yields poor performance due to the difficulty of GAN training and the differences in generator architectures. We address these challenges in two ways. First, to stabilize the GAN training, we transfer knowledge of multiple intermediate representations of the original model to its compressed model, and unify unpaired and paired learning. Second, instead of reusing existing CNN designs, our method automatically finds efficient architectures via neural architecture search (NAS). To accelerate the search process, we decouple the model training and architecture search via weight sharing. Experiments demonstrate the effectiveness of our method across different supervision settings (paired and unpaired), model architectures, and learning methods (e.g., pix2pix, GauGAN, CycleGAN). Without losing image quality, we reduce the computation of CycleGAN by more than 20x and GauGAN by 9x, paving the way for interactive image synthesis. The code and demo are publicly available. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Li_GAN_Compression_Efficient_Architectures_for_Interactive_Conditional_GANs_CVPR_2020_paper.pdf | http://arxiv.org/abs/2003.08936 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Li_GAN_Compression_Efficient_Architectures_for_Interactive_Conditional_GANs_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Li_GAN_Compression_Efficient_Architectures_for_Interactive_Conditional_GANs_CVPR_2020_paper.html | CVPR 2020 | null | null | null |