title | authors | abstract | pdf | arXiv | video | bibtex | url | detail_url | tags | supp | dataset |
---|---|---|---|---|---|---|---|---|---|---|---|
Joint Training of Variational Auto-Encoder and Latent Energy-Based Model | Tian Han, Erik Nijkamp, Linqi Zhou, Bo Pang, Song-Chun Zhu, Ying Nian Wu | This paper proposes a joint training method to learn both the variational auto-encoder (VAE) and the latent energy-based model (EBM). The joint training of VAE and latent EBM is based on an objective function that consists of three Kullback-Leibler divergences between three joint distributions on the latent vector and the image, and the objective function is of an elegant symmetric and anti-symmetric form of divergence triangle that seamlessly integrates variational and adversarial learning. In this joint training scheme, the latent EBM serves as a critic of the generator model, while the generator model and the inference model in VAE serve as the approximate synthesis sampler and inference sampler of the latent EBM. Our experiments show that the joint training greatly improves the synthesis quality of the VAE. It also enables learning of an energy function that is capable of detecting out-of-sample examples for anomaly detection. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Han_Joint_Training_of_Variational_Auto-Encoder_and_Latent_Energy-Based_Model_CVPR_2020_paper.pdf | http://arxiv.org/abs/2006.06059 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Han_Joint_Training_of_Variational_Auto-Encoder_and_Latent_Energy-Based_Model_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Han_Joint_Training_of_Variational_Auto-Encoder_and_Latent_Energy-Based_Model_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
Data-Efficient Semi-Supervised Learning by Reliable Edge Mining | Peibin Chen, Tao Ma, Xu Qin, Weidi Xu, Shuchang Zhou | Learning powerful discriminative features is a challenging task in Semi-Supervised Learning, as the estimation of the feature space is more likely to be wrong with scarcer labeled data. Previous methods utilize a relation graph with edges representing 'similarity' or 'dissimilarity' between nodes. Similar nodes are forced to output consistent features, while dissimilar nodes are forced to be inconsistent. However, since unlabeled data may be wrongly labeled, the judgment of edges may be unreliable. Besides, the nodes connected by edges may already be well fitted, thus contributing little to the model training. We propose Reliable Edge Mining (REM), which forms a reliable graph by only selecting reliable and useful edges. Guided by the graph, the feature extractor is able to learn discriminative features in a data-efficient way, and consequently boosts the accuracy of the learned classifier. Visual analyses show that the features learned are more discriminative and better reveal the underlying structure of the data. REM can be combined with perturbation-based methods like Pi-model, TempEns and Mean Teacher to further improve accuracy. Experiments prove that our method is data-efficient on simple tasks like SVHN and CIFAR-10, and achieves state-of-the-art results on the challenging CIFAR-100. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Chen_Data-Efficient_Semi-Supervised_Learning_by_Reliable_Edge_Mining_CVPR_2020_paper.pdf | null | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Chen_Data-Efficient_Semi-Supervised_Learning_by_Reliable_Edge_Mining_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Chen_Data-Efficient_Semi-Supervised_Learning_by_Reliable_Edge_Mining_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
Stylization-Based Architecture for Fast Deep Exemplar Colorization | Zhongyou Xu, Tingting Wang, Faming Fang, Yun Sheng, Guixu Zhang | Exemplar-based colorization aims to add colors to a grayscale image guided by a content-related reference image. Existing methods are either sensitive to the selection of reference images (content, position) or extremely time and resource consuming, which limits their practical application. To tackle these problems, we propose a deep exemplar colorization architecture inspired by the characteristics of stylization in feature extracting and blending. Our coarse-to-fine architecture consists of two parts: a fast transfer sub-net and a robust colorization sub-net. The transfer sub-net obtains a coarse chrominance map via matching basic feature statistics of the input pairs in a progressive way. The colorization sub-net refines the map to generate the final results. The proposed end-to-end network can jointly learn faithful colorization with a related reference and plausible color prediction with an unrelated reference. Extensive experimental validation demonstrates that our approach outperforms the state-of-the-art methods in less time whether in exemplar-based colorization or image stylization tasks. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Xu_Stylization-Based_Architecture_for_Fast_Deep_Exemplar_Colorization_CVPR_2020_paper.pdf | null | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Xu_Stylization-Based_Architecture_for_Fast_Deep_Exemplar_Colorization_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Xu_Stylization-Based_Architecture_for_Fast_Deep_Exemplar_Colorization_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Xu_Stylization-Based_Architecture_for_CVPR_2020_supplemental.pdf | null | null |
PSGAN: Pose and Expression Robust Spatial-Aware GAN for Customizable Makeup Transfer | Wentao Jiang, Si Liu, Chen Gao, Jie Cao, Ran He, Jiashi Feng, Shuicheng Yan | In this paper, we address the makeup transfer task, which aims to transfer the makeup from a reference image to a source image. Existing methods have achieved promising progress in constrained scenarios, but transferring between images with large pose and expression differences is still challenging. Besides, they cannot realize customizable transfer that allows a controllable shade of makeup or specifies the part to transfer, which limits their applications. To address these issues, we propose Pose and expression robust Spatial-aware GAN (PSGAN). It first utilizes Makeup Distill Network to disentangle the makeup of the reference image as two spatial-aware makeup matrices. Then, Attentive Makeup Morphing module is introduced to specify how the makeup of a pixel in the source image is morphed from the reference image. With the makeup matrices and the source image, Makeup Apply Network is used to perform makeup transfer. Our PSGAN not only achieves state-of-the-art results even when large pose and expression differences exist but also is able to perform partial and shade-controllable makeup transfer. Both the code and a newly collected dataset containing facial images with various poses and expressions will be available at https://github.com/wtjiang98/PSGAN. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Jiang_PSGAN_Pose_and_Expression_Robust_Spatial-Aware_GAN_for_Customizable_Makeup_CVPR_2020_paper.pdf | http://arxiv.org/abs/1909.06956 | https://www.youtube.com/watch?v=Hb5h3ghQQ10 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Jiang_PSGAN_Pose_and_Expression_Robust_Spatial-Aware_GAN_for_Customizable_Makeup_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Jiang_PSGAN_Pose_and_Expression_Robust_Spatial-Aware_GAN_for_Customizable_Makeup_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Jiang_PSGAN_Pose_and_CVPR_2020_supplemental.zip | null | null |
Spatial Pyramid Based Graph Reasoning for Semantic Segmentation | Xia Li, Yibo Yang, Qijie Zhao, Tiancheng Shen, Zhouchen Lin, Hong Liu | The convolution operation suffers from a limited receptive field, while global modeling is fundamental to dense prediction tasks, such as semantic segmentation. In this paper, we apply graph convolution to the semantic segmentation task and propose an improved Laplacian. The graph reasoning is directly performed in the original feature space organized as a spatial pyramid. Different from existing methods, our Laplacian is data-dependent and we introduce an attention diagonal matrix to learn a better distance metric. It gets rid of projecting and re-projecting processes, which makes our proposed method a light-weight module that can be easily plugged into current computer vision architectures. More importantly, performing graph reasoning directly in the feature space retains spatial relationships and makes it possible for the spatial pyramid to explore multiple long-range contextual patterns from different scales. Experiments on Cityscapes, COCO Stuff, PASCAL Context and PASCAL VOC demonstrate the effectiveness of our proposed methods on semantic segmentation. We achieve comparable performance with advantages in computational and memory overhead. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Li_Spatial_Pyramid_Based_Graph_Reasoning_for_Semantic_Segmentation_CVPR_2020_paper.pdf | http://arxiv.org/abs/2003.10211 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Li_Spatial_Pyramid_Based_Graph_Reasoning_for_Semantic_Segmentation_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Li_Spatial_Pyramid_Based_Graph_Reasoning_for_Semantic_Segmentation_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
GAMIN: Generative Adversarial Multiple Imputation Network for Highly Missing Data | Seongwook Yoon, Sanghoon Sull | We propose a novel imputation method for highly missing data. Though most existing imputation methods focus on moderate missing rates, imputation for high missing rates over 80% is still important but challenging. As we expect that multiple imputation is indispensable for high missing rates, we propose a generative adversarial multiple imputation network (GAMIN) based on generative adversarial network (GAN) for multiple imputation. Compared with similar imputation methods adopting GAN, our method has three novel contributions: 1) We propose a novel imputation architecture which generates candidates of imputation. 2) We present a confidence prediction method to perform reliable multiple imputation. 3) We realize them with GAMIN and train it using novel loss functions based on the confidence. We synthesized highly missing datasets using MNIST and CelebA to perform various experiments. The results show that our method outperforms baseline methods at high missing rates from 80% to 95%. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Yoon_GAMIN_Generative_Adversarial_Multiple_Imputation_Network_for_Highly_Missing_Data_CVPR_2020_paper.pdf | null | https://www.youtube.com/watch?v=v-CzlZv4M-Q | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Yoon_GAMIN_Generative_Adversarial_Multiple_Imputation_Network_for_Highly_Missing_Data_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Yoon_GAMIN_Generative_Adversarial_Multiple_Imputation_Network_for_Highly_Missing_Data_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
When to Use Convolutional Neural Networks for Inverse Problems | Nathaniel Chodosh, Simon Lucey | Reconstruction tasks in computer vision aim fundamentally to recover an undetermined signal from a set of noisy measurements. Examples include super-resolution, image denoising, and non-rigid structure from motion, all of which have seen recent advancements through deep learning. However, earlier work made extensive use of sparse signal reconstruction frameworks (e.g. convolutional sparse coding). While this work was ultimately surpassed by deep learning, it rested on a much more developed theoretical framework. Recent work by Papyan et al. provides a bridge between the two approaches by showing how a convolutional neural network (CNN) can be viewed as an approximate solution to a convolutional sparse coding (CSC) problem. In this work we argue that for some types of inverse problems the CNN approximation breaks down, leading to poor performance. We argue that for these types of problems the CSC approach should be used instead and validate this argument with empirical evidence. Specifically, we identify JPEG artifact reduction and non-rigid trajectory reconstruction as challenging inverse problems for CNNs and demonstrate state-of-the-art performance on them using a CSC method. Furthermore, we offer some practical improvements to this model and its application, and also show how insights from the CSC model can be used to make CNNs effective in tasks where their naive application fails. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Chodosh_When_to_Use_Convolutional_Neural_Networks_for_Inverse_Problems_CVPR_2020_paper.pdf | http://arxiv.org/abs/2003.13820 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Chodosh_When_to_Use_Convolutional_Neural_Networks_for_Inverse_Problems_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Chodosh_When_to_Use_Convolutional_Neural_Networks_for_Inverse_Problems_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Chodosh_When_to_Use_CVPR_2020_supplemental.pdf | null | null |
Dynamic Face Video Segmentation via Reinforcement Learning | Yujiang Wang, Mingzhi Dong, Jie Shen, Yang Wu, Shiyang Cheng, Maja Pantic | For real-time semantic video segmentation, most recent works utilised a dynamic framework with a key scheduler to make online key/non-key decisions. Some works used a fixed key scheduling policy, while others proposed adaptive key scheduling methods based on heuristic strategies, both of which may lead to suboptimal global performance. To overcome this limitation, we model the online key decision process in dynamic video segmentation as a deep reinforcement learning problem and learn an efficient and effective scheduling policy from expert information about decision history and from the process of maximising global return. Moreover, we study the application of dynamic video segmentation on face videos, a field that has not been investigated before. By evaluating on the 300VW dataset, we show that the performance of our reinforcement key scheduler outperforms that of various baselines in terms of both effective key selections and running speed. Further results on the Cityscapes dataset demonstrate that our proposed method can also generalise to other scenarios. To the best of our knowledge, this is the first work to use reinforcement learning for online key-frame decision in dynamic video segmentation, and also the first work on its application on face videos. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Wang_Dynamic_Face_Video_Segmentation_via_Reinforcement_Learning_CVPR_2020_paper.pdf | http://arxiv.org/abs/1907.01296 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Wang_Dynamic_Face_Video_Segmentation_via_Reinforcement_Learning_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Wang_Dynamic_Face_Video_Segmentation_via_Reinforcement_Learning_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Wang_Dynamic_Face_Video_CVPR_2020_supplemental.zip | null | null |
ManiGAN: Text-Guided Image Manipulation | Bowen Li, Xiaojuan Qi, Thomas Lukasiewicz, Philip H.S. Torr | The goal of our paper is to semantically edit parts of an image matching a given text that describes desired attributes (e.g., texture, colour, and background), while preserving other contents that are irrelevant to the text. To achieve this, we propose a novel generative adversarial network (ManiGAN), which contains two key components: text-image affine combination module (ACM) and detail correction module (DCM). The ACM selects image regions relevant to the given text and then correlates the regions with corresponding semantic words for effective manipulation. Meanwhile, it encodes original image features to help reconstruct text-irrelevant contents. The DCM rectifies mismatched attributes and completes missing contents of the synthetic image. Finally, we suggest a new metric for evaluating image manipulation results, in terms of both the generation of new attributes and the reconstruction of text-irrelevant contents. Extensive experiments on the CUB and COCO datasets demonstrate the superior performance of the proposed method. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Li_ManiGAN_Text-Guided_Image_Manipulation_CVPR_2020_paper.pdf | http://arxiv.org/abs/1912.06203 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Li_ManiGAN_Text-Guided_Image_Manipulation_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Li_ManiGAN_Text-Guided_Image_Manipulation_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Li_ManiGAN_Text-Guided_Image_CVPR_2020_supplemental.pdf | null | null |
The GAN That Warped: Semantic Attribute Editing With Unpaired Data | Garoe Dorta, Sara Vicente, Neill D. F. Campbell, Ivor J. A. Simpson | Deep neural networks have recently been used to edit images with great success, in particular for faces. However, they are often limited to only being able to work at a restricted range of resolutions. Many methods are so flexible that face edits can often result in an unwanted loss of identity. This work proposes to learn how to perform semantic image edits through the application of smooth warp fields. Previous approaches that attempted to use warping for semantic edits required paired data, i.e. example images of the same subject with different semantic attributes. In contrast, we employ recent advances in Generative Adversarial Networks that allow our model to be trained with unpaired data. We demonstrate face editing at very high resolutions (4k images) with a single forward pass of a deep network at a lower resolution. We also show that our edits are substantially better at preserving the subject's identity. The robustness of our approach is demonstrated by showing plausible image editing results on the Cub200 birds dataset. To our knowledge this has not been previously accomplished, due to the challenging nature of the dataset. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Dorta_The_GAN_That_Warped_Semantic_Attribute_Editing_With_Unpaired_Data_CVPR_2020_paper.pdf | http://arxiv.org/abs/1811.12784 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Dorta_The_GAN_That_Warped_Semantic_Attribute_Editing_With_Unpaired_Data_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Dorta_The_GAN_That_Warped_Semantic_Attribute_Editing_With_Unpaired_Data_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Dorta_The_GAN_That_CVPR_2020_supplemental.zip | null | null |
Detecting Attended Visual Targets in Video | Eunji Chong, Yongxin Wang, Nataniel Ruiz, James M. Rehg | We address the problem of detecting attention targets in video. Our goal is to identify where each person in each frame of a video is looking, and correctly handle the case where the gaze target is out-of-frame. Our novel architecture models the dynamic interaction between the scene and head features and infers time-varying attention targets. We introduce a new annotated dataset, VideoAttentionTarget, containing complex and dynamic patterns of real-world gaze behavior. Our experiments show that our model can effectively infer dynamic attention in videos. In addition, we apply our predicted attention maps to two social gaze behavior recognition tasks, and show that the resulting classifiers significantly outperform existing methods. We achieve state-of-the-art performance on three datasets: GazeFollow (static images), VideoAttentionTarget (videos), and VideoCoAtt (videos), and obtain the first results for automatically classifying clinically-relevant gaze behavior without wearable cameras or eye trackers. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Chong_Detecting_Attended_Visual_Targets_in_Video_CVPR_2020_paper.pdf | http://arxiv.org/abs/2003.02501 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Chong_Detecting_Attended_Visual_Targets_in_Video_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Chong_Detecting_Attended_Visual_Targets_in_Video_CVPR_2020_paper.html | CVPR 2020 | null | https://cove.thecvf.com/datasets/345 | null |
Total Deep Variation for Linear Inverse Problems | Erich Kobler, Alexander Effland, Karl Kunisch, Thomas Pock | Diverse inverse problems in imaging can be cast as variational problems composed of a task-specific data fidelity term and a regularization term. In this paper, we propose a novel learnable general-purpose regularizer exploiting recent architectural design patterns from deep learning. We cast the learning problem as a discrete sampled optimal control problem, for which we derive the adjoint state equations and an optimality condition. By exploiting the variational structure of our approach, we perform a sensitivity analysis with respect to the learned parameters obtained from different training datasets. Moreover, we carry out a nonlinear eigenfunction analysis, which reveals interesting properties of the learned regularizer. We show state-of-the-art performance for classical image restoration and medical image reconstruction problems. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Kobler_Total_Deep_Variation_for_Linear_Inverse_Problems_CVPR_2020_paper.pdf | http://arxiv.org/abs/2001.05005 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Kobler_Total_Deep_Variation_for_Linear_Inverse_Problems_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Kobler_Total_Deep_Variation_for_Linear_Inverse_Problems_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Kobler_Total_Deep_Variation_CVPR_2020_supplemental.pdf | null | null |
Learning Multi-Object Tracking and Segmentation From Automatic Annotations | Lorenzo Porzi, Markus Hofinger, Idoia Ruiz, Joan Serrat, Samuel Rota Bulo, Peter Kontschieder | In this work we contribute a novel pipeline to automatically generate training data, and to improve over state-of-the-art multi-object tracking and segmentation (MOTS) methods. Our proposed track mining algorithm turns raw street-level videos into high-fidelity MOTS training data, is scalable and overcomes the need for expensive and time-consuming manual annotation approaches. We leverage state-of-the-art instance segmentation results in combination with optical flow predictions, also trained on automatically harvested training data. Our second major contribution is MOTSNet - a deep learning, tracking-by-detection architecture for MOTS - deploying a novel mask-pooling layer for improved object association over time. Training MOTSNet with our automatically extracted data leads to significantly improved sMOTSA scores on the novel KITTI MOTS dataset (+1.9%/+7.5% on cars/pedestrians), and MOTSNet improves by +4.1% over previously best methods on the MOTSChallenge dataset. Our most impressive finding is that we can improve over previous best-performing works, even in complete absence of manually annotated MOTS training data. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Porzi_Learning_Multi-Object_Tracking_and_Segmentation_From_Automatic_Annotations_CVPR_2020_paper.pdf | http://arxiv.org/abs/1912.02096 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Porzi_Learning_Multi-Object_Tracking_and_Segmentation_From_Automatic_Annotations_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Porzi_Learning_Multi-Object_Tracking_and_Segmentation_From_Automatic_Annotations_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Porzi_Learning_Multi-Object_Tracking_CVPR_2020_supplemental.pdf | null | null |
GeoDA: A Geometric Framework for Black-Box Adversarial Attacks | Ali Rahmati, Seyed-Mohsen Moosavi-Dezfooli, Pascal Frossard, Huaiyu Dai | Adversarial examples are known as carefully perturbed images fooling image classifiers. We propose a geometric framework to generate adversarial examples in one of the most challenging black-box settings where the adversary can only generate a small number of queries, each of them returning the top-1 label of the classifier. Our framework is based on the observation that the decision boundary of deep networks usually has a small mean curvature in the vicinity of data samples. We propose an effective iterative algorithm to generate query-efficient black-box perturbations with small ℓ_p norms, which is confirmed via experimental evaluations on state-of-the-art natural image classifiers. Moreover, for p=2, we theoretically show that our algorithm actually converges to the minimal perturbation when the curvature of the decision boundary is bounded. We also obtain the optimal distribution of the queries over the iterations of the algorithm. Finally, experimental results confirm that our principled black-box attack algorithm performs better than state-of-the-art algorithms as it generates smaller perturbations with a reduced number of queries. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Rahmati_GeoDA_A_Geometric_Framework_for_Black-Box_Adversarial_Attacks_CVPR_2020_paper.pdf | null | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Rahmati_GeoDA_A_Geometric_Framework_for_Black-Box_Adversarial_Attacks_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Rahmati_GeoDA_A_Geometric_Framework_for_Black-Box_Adversarial_Attacks_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Rahmati_GeoDA_A_Geometric_CVPR_2020_supplemental.pdf | null | null |
Semantically Multi-Modal Image Synthesis | Zhen Zhu, Zhiliang Xu, Ansheng You, Xiang Bai | In this paper, we focus on semantically multi-modal image synthesis (SMIS) task, namely, generating multi-modal images at the semantic level. Previous work seeks to use multiple class-specific generators, constraining its usage in datasets with a small number of classes. We instead propose a novel Group Decreasing Network (GroupDNet) that leverages group convolutions in the generator and progressively decreases the group numbers of the convolutions in the decoder. Consequently, GroupDNet is armed with much more controllability on translating semantic labels to natural images and has plausible high-quality yields for datasets with many classes. Experiments on several challenging datasets demonstrate the superiority of GroupDNet on performing the SMIS task. We also show that GroupDNet is capable of performing a wide range of interesting synthesis applications. Codes and models are available at: https://github.com/Seanseattle/SMIS. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Zhu_Semantically_Multi-Modal_Image_Synthesis_CVPR_2020_paper.pdf | http://arxiv.org/abs/2003.12697 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Zhu_Semantically_Multi-Modal_Image_Synthesis_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Zhu_Semantically_Multi-Modal_Image_Synthesis_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Zhu_Semantically_Multi-Modal_Image_CVPR_2020_supplemental.pdf | null | null |
Copy and Paste GAN: Face Hallucination From Shaded Thumbnails | Yang Zhang, Ivor W. Tsang, Yawei Luo, Chang-Hui Hu, Xiaobo Lu, Xin Yu | Existing face hallucination methods based on convolutional neural networks (CNN) have achieved impressive performance on low-resolution (LR) faces in a normal illumination condition. However, their performance degrades dramatically when LR faces are captured in low or non-uniform illumination conditions. This paper proposes a Copy and Paste Generative Adversarial Network (CPGAN) to recover authentic high-resolution (HR) face images while compensating for low and non-uniform illumination. To this end, we develop two key components in our CPGAN: internal and external Copy and Paste nets (CPnets). Specifically, our internal CPnet exploits facial information residing in the input image to enhance facial details; while our external CPnet leverages an external HR face for illumination compensation. A new illumination compensation loss is thus developed to capture illumination from the external guided face image effectively. Furthermore, our method offsets illumination and upsamples facial details alternatively in a coarse-to-fine fashion, thus alleviating the correspondence ambiguity between LR inputs and external HR inputs. Extensive experiments demonstrate that our method manifests authentic HR face images in a uniform illumination condition and outperforms state-of-the-art methods qualitatively and quantitatively. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Zhang_Copy_and_Paste_GAN_Face_Hallucination_From_Shaded_Thumbnails_CVPR_2020_paper.pdf | http://arxiv.org/abs/2002.10650 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Zhang_Copy_and_Paste_GAN_Face_Hallucination_From_Shaded_Thumbnails_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Zhang_Copy_and_Paste_GAN_Face_Hallucination_From_Shaded_Thumbnails_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Zhang_Copy_and_Paste_CVPR_2020_supplemental.pdf | null | null |
Leveraging 2D Data to Learn Textured 3D Mesh Generation | Paul Henderson, Vagia Tsiminaki, Christoph H. Lampert | Numerous methods have been proposed for probabilistic generative modelling of 3D objects. However, none of these is able to produce textured objects, which renders them of limited use for practical tasks. In this work, we present the first generative model of textured 3D meshes. Training such a model would traditionally require a large dataset of textured meshes, but unfortunately, existing datasets of meshes lack detailed textures. We instead propose a new training methodology that allows learning from collections of 2D images without any 3D information. To do so, we train our model to explain a distribution of images by modelling each image as a 3D foreground object placed in front of a 2D background. Thus, it learns to generate meshes that when rendered, produce images similar to those in its training set. A well-known problem when generating meshes with deep networks is the emergence of self-intersections, which are problematic for many use-cases. As a second contribution we therefore introduce a new generation process for 3D meshes that guarantees no self-intersections arise, based on the physical intuition that faces should push one another out of the way as they move. We conduct extensive experiments on our approach, reporting quantitative and qualitative results on both synthetic data and natural images. These show our method successfully learns to generate plausible and diverse textured 3D samples for five challenging object classes. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Henderson_Leveraging_2D_Data_to_Learn_Textured_3D_Mesh_Generation_CVPR_2020_paper.pdf | http://arxiv.org/abs/2004.04180 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Henderson_Leveraging_2D_Data_to_Learn_Textured_3D_Mesh_Generation_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Henderson_Leveraging_2D_Data_to_Learn_Textured_3D_Mesh_Generation_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Henderson_Leveraging_2D_Data_CVPR_2020_supplemental.pdf | null | null |
Bidirectional Graph Reasoning Network for Panoptic Segmentation | Yangxin Wu, Gengwei Zhang, Yiming Gao, Xiajun Deng, Ke Gong, Xiaodan Liang, Liang Lin | Recent research on panoptic segmentation resorts to a single end-to-end network to combine the tasks of instance segmentation and semantic segmentation. However, prior models only unified the two related tasks at the architectural level via a multi-branch scheme or revealed the underlying correlation between them by unidirectional feature fusion, which disregards the explicit semantic and co-occurrence relations among objects and background. Inspired by the fact that context information is critical to recognize and localize the objects, and inclusive object details are significant to parse the background scene, we thus investigate explicitly modeling the correlations between object and background to achieve a holistic understanding of an image in the panoptic segmentation task. We introduce a Bidirectional Graph Reasoning Network (BGRNet), which incorporates graph structure into the conventional panoptic segmentation network to mine the intra-modular and inter-modular relations within and between foreground things and background stuff classes. In particular, BGRNet first constructs image-specific graphs in both instance and semantic segmentation branches that enable flexible reasoning at the proposal level and class level, respectively. To establish the correlations between separate branches and fully leverage the complementary relations between things and stuff, we propose a Bidirectional Graph Connection Module to diffuse information across branches in a learnable fashion. Experimental results demonstrate the superiority of our BGRNet that achieves the new state-of-the-art performance on challenging COCO and ADE20K panoptic segmentation benchmarks. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Wu_Bidirectional_Graph_Reasoning_Network_for_Panoptic_Segmentation_CVPR_2020_paper.pdf | http://arxiv.org/abs/2004.06272 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Wu_Bidirectional_Graph_Reasoning_Network_for_Panoptic_Segmentation_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Wu_Bidirectional_Graph_Reasoning_Network_for_Panoptic_Segmentation_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Wu_Bidirectional_Graph_Reasoning_CVPR_2020_supplemental.pdf | null | null |
High-Order Information Matters: Learning Relation and Topology for Occluded Person Re-Identification | Guan'an Wang, Shuo Yang, Huanyu Liu, Zhicheng Wang, Yang Yang, Shuliang Wang, Gang Yu, Erjin Zhou, Jian Sun | Occluded person re-identification (ReID) aims to match occluded person images to holistic ones across disjoint cameras. In this paper, we propose a novel framework by learning high-order relation and topology information for discriminative features and robust alignment. At first, we use a CNN backbone to learn feature maps and a key-point estimation model to extract semantic local features. Even so, occluded images still suffer from occlusion and outliers. Then, we view the extracted local features of an image as nodes of a graph and propose an adaptive direction graph convolutional (ADGC) layer to pass relation information between nodes. The proposed ADGC layer can automatically suppress the message passing of meaningless features by dynamically learning direction and degree of linkage. When aligning two groups of local features, we view it as a graph matching problem and propose a cross-graph embedded-alignment (CGEA) layer to jointly learn and embed topology information into local features, and directly predict a similarity score. The proposed CGEA layer can both make full use of the alignment learned by graph matching and replace sensitive one-to-one alignment with a robust soft one. Finally, extensive experiments on occluded, partial, and holistic ReID tasks show the effectiveness of our proposed method. Specifically, our framework significantly outperforms the state of the art by 6.5% mAP on the Occluded-Duke dataset. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Wang_High-Order_Information_Matters_Learning_Relation_and_Topology_for_Occluded_Person_CVPR_2020_paper.pdf | http://arxiv.org/abs/2003.08177 | https://www.youtube.com/watch?v=9wDqE6D_SVI | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Wang_High-Order_Information_Matters_Learning_Relation_and_Topology_for_Occluded_Person_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Wang_High-Order_Information_Matters_Learning_Relation_and_Topology_for_Occluded_Person_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
Self-Supervised Monocular Scene Flow Estimation | Junhwa Hur, Stefan Roth | Scene flow estimation has been receiving increasing attention for 3D environment perception. Monocular scene flow estimation - obtaining 3D structure and 3D motion from two temporally consecutive images - is a highly ill-posed problem, and practical solutions are lacking to date. We propose a novel monocular scene flow method that yields competitive accuracy and real-time performance. By taking an inverse problem view, we design a single convolutional neural network (CNN) that successfully estimates depth and 3D motion simultaneously from a classical optical flow cost volume. We adopt self-supervised learning with 3D loss functions and occlusion reasoning to leverage unlabeled data. We validate our design choices, including the proxy loss and augmentation setup. Our model achieves state-of-the-art accuracy among unsupervised/self-supervised learning approaches to monocular scene flow, and yields competitive results for the optical flow and monocular depth estimation sub-tasks. Semi-supervised fine-tuning further improves the accuracy and yields promising results in real-time. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Hur_Self-Supervised_Monocular_Scene_Flow_Estimation_CVPR_2020_paper.pdf | http://arxiv.org/abs/2004.04143 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Hur_Self-Supervised_Monocular_Scene_Flow_Estimation_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Hur_Self-Supervised_Monocular_Scene_Flow_Estimation_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Hur_Self-Supervised_Monocular_Scene_CVPR_2020_supplemental.pdf | null | null |
End-to-End Model-Free Reinforcement Learning for Urban Driving Using Implicit Affordances | Marin Toromanoff, Emilie Wirbel, Fabien Moutarde | Reinforcement Learning (RL) aims at learning an optimal behavior policy from its own experiments rather than from rule-based control methods. However, there is no RL algorithm yet capable of handling a task as difficult as urban driving. We present a novel technique, coined implicit affordances, to effectively leverage RL for urban driving, including lane keeping, pedestrian and vehicle avoidance, and traffic light detection. To our knowledge, we are the first to present a successful RL agent handling such a complex task, especially regarding traffic light detection. Furthermore, we have demonstrated the effectiveness of our method by winning the Camera Only track of the CARLA challenge. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Toromanoff_End-to-End_Model-Free_Reinforcement_Learning_for_Urban_Driving_Using_Implicit_Affordances_CVPR_2020_paper.pdf | http://arxiv.org/abs/1911.10868 | https://www.youtube.com/watch?v=hfEos9HpgBA | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Toromanoff_End-to-End_Model-Free_Reinforcement_Learning_for_Urban_Driving_Using_Implicit_Affordances_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Toromanoff_End-to-End_Model-Free_Reinforcement_Learning_for_Urban_Driving_Using_Implicit_Affordances_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Toromanoff_End-to-End_Model-Free_Reinforcement_CVPR_2020_supplemental.pdf | null | null |
Model Adaptation: Unsupervised Domain Adaptation Without Source Data | Rui Li, Qianfen Jiao, Wenming Cao, Hau-San Wong, Si Wu | In this paper, we investigate a challenging unsupervised domain adaptation setting: unsupervised model adaptation. We aim to explore how to rely only on unlabeled target data to improve the performance of an existing source prediction model on the target domain, since labeled source data may not be available in some real-world scenarios due to data privacy issues. For this purpose, we propose a new framework, which is referred to as collaborative class conditional generative adversarial net to bypass the dependence on the source data. Specifically, the prediction model is to be improved through generated target-style data, which provides more accurate guidance for the generator. As a result, the generator and the prediction model can collaborate with each other without source data. Furthermore, due to the lack of supervision from source data, we propose a weight constraint that encourages similarity to the source model. A clustering-based regularization is also introduced to produce more discriminative features in the target domain. Compared to conventional domain adaptation methods, our model achieves superior performance on multiple adaptation tasks with only unlabeled target data, which verifies its effectiveness in this challenging setting. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Li_Model_Adaptation_Unsupervised_Domain_Adaptation_Without_Source_Data_CVPR_2020_paper.pdf | null | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Li_Model_Adaptation_Unsupervised_Domain_Adaptation_Without_Source_Data_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Li_Model_Adaptation_Unsupervised_Domain_Adaptation_Without_Source_Data_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Li_Model_Adaptation_Unsupervised_CVPR_2020_supplemental.pdf | null | null |
VecRoad: Point-Based Iterative Graph Exploration for Road Graphs Extraction | Yong-Qiang Tan, Shang-Hua Gao, Xuan-Yi Li, Ming-Ming Cheng, Bo Ren | Extracting road graphs from aerial images automatically is more efficient and costs less than field acquisition. This can be done by a post-processing step that vectorizes road segmentation predicted by CNN, but imperfect predictions will result in road graphs with low connectivity. On the other hand, iterative next move exploration could construct road graphs with better road connectivity, but often focuses on local information and does not provide precise alignment with the real road. To enhance the road connectivity while maintaining the precise alignment between the graph and real road, we propose a point-based iterative graph exploration scheme with segmentation-cues guidance and flexible steps. In our approach, we represent the location of the next move as a 'point' that unifies the representation of multiple constraints such as the direction and step size in each moving step. Information cues such as road segmentation and road junctions are jointly detected and utilized to guide the next move and achieve better alignment of roads. We demonstrate that our proposed method has a considerable improvement over state-of-the-art road graph extraction methods in terms of F-measure and road connectivity metrics on common datasets. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Tan_VecRoad_Point-Based_Iterative_Graph_Exploration_for_Road_Graphs_Extraction_CVPR_2020_paper.pdf | null | https://www.youtube.com/watch?v=xs6Atib6KtY | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Tan_VecRoad_Point-Based_Iterative_Graph_Exploration_for_Road_Graphs_Extraction_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Tan_VecRoad_Point-Based_Iterative_Graph_Exploration_for_Road_Graphs_Extraction_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Tan_VecRoad_Point-Based_Iterative_CVPR_2020_supplemental.pdf | null | null |
Uncertainty Based Camera Model Selection | Michal Polic, Stanislav Steidl, Cenek Albl, Zuzana Kukelova, Tomas Pajdla | The quality and speed of Structure from Motion (SfM) methods depend significantly on the camera model chosen for the reconstruction. In most of the SfM pipelines, the camera model is manually chosen by the user. In this paper, we present a new automatic method for camera model selection in large scale SfM that is based on efficient uncertainty evaluation. We first perform an extensive comparison of classical model selection based on known Information Criteria and show that they do not provide sufficiently accurate results when applied to camera model selection. Then we propose a new Accuracy-based Criterion, which evaluates an efficient approximation of the uncertainty of the estimated parameters in tested models. Using the new criterion, we design a camera model selection method and fine-tune it by machine learning. Our simulated and real experiments demonstrate a significant increase in reconstruction quality as well as a considerable speedup of the SfM process. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Polic_Uncertainty_Based_Camera_Model_Selection_CVPR_2020_paper.pdf | null | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Polic_Uncertainty_Based_Camera_Model_Selection_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Polic_Uncertainty_Based_Camera_Model_Selection_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Polic_Uncertainty_Based_Camera_CVPR_2020_supplemental.pdf | null | null |
Learning a Neural 3D Texture Space From 2D Exemplars | Philipp Henzler, Niloy J. Mitra, Tobias Ritschel | We suggest a generative model of 2D and 3D natural textures with diversity and visual fidelity at high computational efficiency. This is enabled by a family of methods that extend ideas from classic stochastic procedural texturing (Perlin noise) to learned, deep, non-linearities. Our model encodes all exemplars from a diverse set of textures without a need to be re-trained for each exemplar. Applications include texture interpolation, and learning 3D textures from 2D exemplars. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Henzler_Learning_a_Neural_3D_Texture_Space_From_2D_Exemplars_CVPR_2020_paper.pdf | http://arxiv.org/abs/1912.04158 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Henzler_Learning_a_Neural_3D_Texture_Space_From_2D_Exemplars_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Henzler_Learning_a_Neural_3D_Texture_Space_From_2D_Exemplars_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Henzler_Learning_a_Neural_CVPR_2020_supplemental.zip | null | null |
Structure-Preserving Super Resolution With Gradient Guidance | Cheng Ma, Yongming Rao, Yean Cheng, Ce Chen, Jiwen Lu, Jie Zhou | Structures matter in single image super resolution (SISR). Recent studies benefiting from generative adversarial network (GAN) have promoted the development of SISR by recovering photo-realistic images. However, there are always undesired structural distortions in the recovered images. In this paper, we propose a structure-preserving super resolution method to alleviate the above issue while maintaining the merits of GAN-based methods to generate perceptually pleasant details. Specifically, we exploit gradient maps of images to guide the recovery in two aspects. On the one hand, we restore high-resolution gradient maps by a gradient branch to provide additional structure priors for the SR process. On the other hand, we propose a gradient loss which imposes a second-order restriction on the super-resolved images. Along with the previous image-space loss functions, the gradient-space objectives help generative networks concentrate more on geometric structures. Moreover, our method is model-agnostic, which can be potentially used for off-the-shelf SR networks. Experimental results show that we achieve the best PI and LPIPS performance, and meanwhile comparable PSNR and SSIM, compared with state-of-the-art perceptual-driven SR methods. Visual results demonstrate our superiority in restoring structures while generating natural SR images. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Ma_Structure-Preserving_Super_Resolution_With_Gradient_Guidance_CVPR_2020_paper.pdf | http://arxiv.org/abs/2003.13081 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Ma_Structure-Preserving_Super_Resolution_With_Gradient_Guidance_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Ma_Structure-Preserving_Super_Resolution_With_Gradient_Guidance_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Ma_Structure-Preserving_Super_Resolution_CVPR_2020_supplemental.pdf | null | null |
Neural Contours: Learning to Draw Lines From 3D Shapes | Difan Liu, Mohamed Nabail, Aaron Hertzmann, Evangelos Kalogerakis | This paper introduces a method for learning to generate line drawings from 3D models. Our architecture incorporates a differentiable module operating on geometric features of the 3D model, and an image-based module operating on view-based shape representations. At test time, geometric and view-based reasoning are combined with the help of a neural module to create a line drawing. The model is trained on a large number of crowdsourced comparisons of line drawings. Experiments demonstrate that our method achieves significant improvements in line drawing over the state-of-the-art when evaluated on standard benchmarks, resulting in drawings that are comparable to those produced by experienced human artists. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Liu_Neural_Contours_Learning_to_Draw_Lines_From_3D_Shapes_CVPR_2020_paper.pdf | http://arxiv.org/abs/2003.10333 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Liu_Neural_Contours_Learning_to_Draw_Lines_From_3D_Shapes_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Liu_Neural_Contours_Learning_to_Draw_Lines_From_3D_Shapes_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Liu_Neural_Contours_Learning_CVPR_2020_supplemental.pdf | null | null |
An Efficient PointLSTM for Point Clouds Based Gesture Recognition | Yuecong Min, Yanxiao Zhang, Xiujuan Chai, Xilin Chen | Point clouds contain rich spatial information, which provides complementary cues for gesture recognition. In this paper, we formulate gesture recognition as an irregular sequence recognition problem and aim to capture long-term spatial correlations across point cloud sequences. A novel and effective PointLSTM is proposed to propagate information from past to future while preserving the spatial structure. The proposed PointLSTM combines state information from neighboring points in the past with current features to update the current states by a weight-shared LSTM layer. This method can be integrated into many other sequence learning approaches. In the task of gesture recognition, the proposed PointLSTM achieves state-of-the-art results on two challenging datasets (NVGesture and SHREC'17) and outperforms previous skeleton-based methods. To show its advantages in generalization, we evaluate our method on MSR Action3D dataset, and it produces competitive results with previous skeleton-based methods. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Min_An_Efficient_PointLSTM_for_Point_Clouds_Based_Gesture_Recognition_CVPR_2020_paper.pdf | null | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Min_An_Efficient_PointLSTM_for_Point_Clouds_Based_Gesture_Recognition_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Min_An_Efficient_PointLSTM_for_Point_Clouds_Based_Gesture_Recognition_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Min_An_Efficient_PointLSTM_CVPR_2020_supplemental.pdf | null | null |
SCOUT: Self-Aware Discriminant Counterfactual Explanations | Pei Wang, Nuno Vasconcelos | The problem of counterfactual visual explanations is considered. A new family of discriminant explanations is introduced. These produce heatmaps that attribute high scores to image regions informative of a classifier prediction but not of a counter class. They connect attributive explanations, which are based on a single heat map, to counterfactual explanations, which account for both predicted class and counter class. The latter are shown to be computable by combination of two discriminant explanations, with reversed class pairs. It is argued that self-awareness, namely the ability to produce classification confidence scores, is important for the computation of discriminant explanations, which seek to identify regions where it is easy to discriminate between prediction and counter class. This suggests the computation of discriminant explanations by the combination of three attribution maps. The resulting counterfactual explanations are optimization free and thus much faster than previous methods. To address the difficulty of their evaluation, a proxy task and set of quantitative metrics are also proposed. Experiments under this protocol show that the proposed counterfactual explanations outperform the state of the art while running much faster, for popular networks. In a human-learning machine teaching experiment, they are also shown to improve mean student accuracy from chance level to 95%. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Wang_SCOUT_Self-Aware_Discriminant_Counterfactual_Explanations_CVPR_2020_paper.pdf | http://arxiv.org/abs/2004.07769 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Wang_SCOUT_Self-Aware_Discriminant_Counterfactual_Explanations_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Wang_SCOUT_Self-Aware_Discriminant_Counterfactual_Explanations_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Wang_SCOUT_Self-Aware_Discriminant_CVPR_2020_supplemental.pdf | null | null |
Select to Better Learn: Fast and Accurate Deep Learning Using Data Selection From Nonlinear Manifolds | Mohsen Joneidi, Saeed Vahidian, Ashkan Esmaeili, Weijia Wang, Nazanin Rahnavard, Bill Lin, Mubarak Shah | Finding a small subset of data whose linear combination spans other data points, also called column subset selection problem (CSSP), is an important open problem in computer science with many applications in computer vision and deep learning. There are some studies that solve CSSP in a polynomial time complexity w.r.t. the size of the original dataset. A simple and efficient selection algorithm with a linear complexity order, referred to as spectrum pursuit (SP), is proposed that pursues spectral components of the dataset using available sample points. The proposed non-greedy algorithm aims to iteratively find K data samples whose span is close to that of the first K spectral components of the entire data. SP has no parameter to be fine-tuned and this desirable property makes it problem-independent. The simplicity of SP enables us to extend the underlying linear model to more complex models such as nonlinear manifolds and graph-based models. The nonlinear extension of SP is introduced as kernel-SP (KSP). The superiority of the proposed algorithms is demonstrated in a wide range of applications. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Joneidi_Select_to_Better_Learn_Fast_and_Accurate_Deep_Learning_Using_CVPR_2020_paper.pdf | null | https://www.youtube.com/watch?v=bnuWTfrdNIA | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Joneidi_Select_to_Better_Learn_Fast_and_Accurate_Deep_Learning_Using_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Joneidi_Select_to_Better_Learn_Fast_and_Accurate_Deep_Learning_Using_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Joneidi_Select_to_Better_CVPR_2020_supplemental.pdf | null | null |
Towards Photo-Realistic Virtual Try-On by Adaptively Generating-Preserving Image Content | Han Yang, Ruimao Zhang, Xiaobao Guo, Wei Liu, Wangmeng Zuo, Ping Luo | Image-based virtual try-on aims at transferring a target clothes image onto a reference person, and has become a hot topic in recent years. Prior works usually focus on preserving the character of a clothes image (e.g., texture, logo, embroidery) when warping it to an arbitrary human pose. However, it remains a big challenge to generate photo-realistic try-on images when large occlusions and complex human poses are present in the reference person. To address this issue, we propose a novel visual try-on network, namely Adaptive Content Generating and Preserving Network (ACGPN). In particular, ACGPN first predicts the semantic layout of the reference image that will be changed after try-on (e.g., long sleeve shirt-arm, arm-jacket), and then determines whether its image content needs to be generated or preserved according to the predicted semantic layout, leading to photo-realistic try-on and rich clothes details. ACGPN generally involves three major modules. First, a semantic layout generation module utilizes semantic segmentation of the reference image to progressively predict the desired semantic layout after try-on. Second, a clothes warping module warps the clothes image according to the generated semantic layout, where a second-order difference constraint is introduced to stabilize the warping process during training. Third, an inpainting module for content fusion integrates all information (e.g., reference image, semantic layout, warped clothes) to adaptively produce each semantic part of the human body. In comparison to the state-of-the-art methods, ACGPN can generate photo-realistic images with much better perceptual quality and richer fine details. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Yang_Towards_Photo-Realistic_Virtual_Try-On_by_Adaptively_Generating-Preserving_Image_Content_CVPR_2020_paper.pdf | null | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Yang_Towards_Photo-Realistic_Virtual_Try-On_by_Adaptively_Generating-Preserving_Image_Content_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Yang_Towards_Photo-Realistic_Virtual_Try-On_by_Adaptively_Generating-Preserving_Image_Content_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Yang_Towards_Photo-Realistic_Virtual_CVPR_2020_supplemental.pdf | null | null |
Phase Consistent Ecological Domain Adaptation | Yanchao Yang, Dong Lao, Ganesh Sundaramoorthi, Stefano Soatto | We introduce two criteria to regularize the optimization involved in learning a classifier in a domain where no annotated data are available, leveraging annotated data in a different domain, a problem known as unsupervised domain adaptation. We focus on the task of semantic segmentation, where annotated synthetic data are plentiful, but annotating real data is laborious. The first criterion, inspired by visual psychophysics, is that the map between the two image domains be phase-preserving. This restricts the set of possible learned maps, while enabling enough flexibility to transfer semantic information. The second criterion aims to leverage ecological statistics, or regularities in the scene which are manifest in any image of it, regardless of the characteristics of the illuminant or the imaging sensor. It is implemented using a deep neural network that scores the likelihood of each possible segmentation given a single un-annotated image. Incorporating these two priors in a standard domain adaptation framework improves performance across the board in the most common unsupervised domain adaptation benchmarks for semantic segmentation. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Yang_Phase_Consistent_Ecological_Domain_Adaptation_CVPR_2020_paper.pdf | http://arxiv.org/abs/2004.04923 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Yang_Phase_Consistent_Ecological_Domain_Adaptation_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Yang_Phase_Consistent_Ecological_Domain_Adaptation_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
Information-Driven Direct RGB-D Odometry | Alejandro Fontan, Javier Civera, Rudolph Triebel | This paper presents an information-theoretic approach to point selection in direct RGB-D odometry. The aim is to select only the most informative measurements, in order to reduce the size of the optimization problem with minimal impact on accuracy. It is usual practice in visual odometry/SLAM to track several hundred points, achieving real-time performance on high-end desktop PCs. Reducing their computational footprint will facilitate the implementation of odometry and SLAM on low-end platforms such as small robots and AR/VR glasses. Our experimental results show that our novel information-based selection criterion allows us to reduce the number of tracked points by an order of magnitude (down to only 24 of them), achieving an accuracy similar to the state of the art (sometimes outperforming it) while reducing the computational demand by a factor of 10. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Fontan_Information-Driven_Direct_RGB-D_Odometry_CVPR_2020_paper.pdf | null | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Fontan_Information-Driven_Direct_RGB-D_Odometry_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Fontan_Information-Driven_Direct_RGB-D_Odometry_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
Single-Side Domain Generalization for Face Anti-Spoofing | Yunpei Jia, Jie Zhang, Shiguang Shan, Xilin Chen | Existing domain generalization methods for face anti-spoofing endeavor to extract common differentiation features to improve generalization. However, due to large distribution discrepancies among fake faces of different domains, it is difficult to find a compact and generalized feature space for the fake faces. In this work, we propose an end-to-end single-side domain generalization framework (SSDG) to improve the generalization ability of face anti-spoofing. The main idea is to learn a generalized feature space, where the feature distribution of the real faces is compact while that of the fake ones is dispersed among domains but compact within each domain. Specifically, a feature generator is trained to make only the real faces from different domains indistinguishable, but not the fake ones, thus forming single-side adversarial learning. Moreover, an asymmetric triplet loss is designed to keep the fake faces of different domains separated while aggregating the real ones. The above two points are integrated into a unified framework in an end-to-end training manner, resulting in a more generalized class boundary, especially good for samples from novel domains. Feature and weight normalization is incorporated to further improve the generalization ability. Extensive experiments show that our proposed approach is effective and outperforms the state-of-the-art methods on four public databases. The code is released online. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Jia_Single-Side_Domain_Generalization_for_Face_Anti-Spoofing_CVPR_2020_paper.pdf | http://arxiv.org/abs/2004.14043 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Jia_Single-Side_Domain_Generalization_for_Face_Anti-Spoofing_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Jia_Single-Side_Domain_Generalization_for_Face_Anti-Spoofing_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
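The asymmetry in the triplet loss above can be sketched by relabeling: all real faces share one label so they aggregate across domains, while fake faces keep per-domain labels so they stay separated. This is a simplified sketch, assuming batch-hard mining; the paper's exact mining and weighting may differ.

```python
import torch
import torch.nn.functional as F

def asymmetric_triplet_loss(feats, is_real, domain, margin=0.2):
    """Real faces share label 0 (pulled together across domains); fake
    faces get per-domain labels (pushed apart across domains)."""
    labels = torch.where(is_real, torch.zeros_like(domain), domain + 1)
    feats = F.normalize(feats, dim=1)
    dist = torch.cdist(feats, feats)
    loss, count = feats.new_zeros(()), 0
    for i in range(feats.shape[0]):
        pos = labels == labels[i]
        pos[i] = False
        neg = labels != labels[i]
        if pos.any() and neg.any():
            # Batch-hard mining: hardest positive vs. hardest negative.
            loss = loss + F.relu(dist[i][pos].max() - dist[i][neg].min() + margin)
            count += 1
    return loss / max(count, 1)
```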
Optical Non-Line-of-Sight Physics-Based 3D Human Pose Estimation | Mariko Isogawa, Ye Yuan, Matthew O'Toole, Kris M. Kitani | We describe a method for 3D human pose estimation from transient images (i.e., a 3D spatio-temporal histogram of photons) acquired by an optical non-line-of-sight (NLOS) imaging system. Our method can perceive 3D human pose by 'looking around corners' through the use of light indirectly reflected by the environment. We bring together a diverse set of technologies from NLOS imaging, human pose estimation and deep reinforcement learning to construct an end-to-end data processing pipeline that converts a raw stream of photon measurements into a full 3D human pose sequence estimate. Our contributions are the design of a data representation process which includes (1) a learnable inverse point spread function (PSF) to convert raw transient images into a deep feature vector; (2) a neural humanoid control policy conditioned on the transient image feature and learned from interactions with a physics simulator; and (3) a data synthesis and augmentation strategy based on depth data that can be transferred to a real-world NLOS imaging system. Our preliminary experiments suggest that our method is able to generalize to real-world NLOS measurements to estimate physically-valid 3D human poses. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Isogawa_Optical_Non-Line-of-Sight_Physics-Based_3D_Human_Pose_Estimation_CVPR_2020_paper.pdf | http://arxiv.org/abs/2003.14414 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Isogawa_Optical_Non-Line-of-Sight_Physics-Based_3D_Human_Pose_Estimation_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Isogawa_Optical_Non-Line-of-Sight_Physics-Based_3D_Human_Pose_Estimation_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Isogawa_Optical_Non-Line-of-Sight_Physics-Based_CVPR_2020_supplemental.zip | null | null |
Barycenters of Natural Images - Constrained Wasserstein Barycenters for Image Morphing | Dror Simon, Aviad Aberdam | Image interpolation, or image morphing, refers to a visual transition between two (or more) input images. For such a transition to look visually appealing, its desirable properties are (i) to be smooth; (ii) to apply the minimal required change in the image; and (iii) to seem "real", avoiding unnatural artifacts in each image in the transition. To obtain a smooth and straightforward transition, one may adopt the well-known Wasserstein Barycenter Problem (WBP). While this approach guarantees minimal changes under the Wasserstein metric, the resulting images might seem unnatural. In this work, we propose a novel approach for image morphing that possesses all three desired properties. To this end, we define a constrained variant of the WBP that enforces the intermediate images to satisfy an image prior. We describe an algorithm that solves this problem and demonstrate it using the sparse prior and generative adversarial networks. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Simon_Barycenters_of_Natural_Images__Constrained_Wasserstein_Barycenters_for_Image_CVPR_2020_paper.pdf | null | https://www.youtube.com/watch?v=_bFhmfd6C5E | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Simon_Barycenters_of_Natural_Images__Constrained_Wasserstein_Barycenters_for_Image_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Simon_Barycenters_of_Natural_Images__Constrained_Wasserstein_Barycenters_for_Image_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
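For intuition about what the unconstrained Wasserstein barycenter interpolation looks like, the 1-D case has a closed form: average the quantile functions (inverse CDFs) of the two distributions. The sketch below covers only this unconstrained case; the paper's contribution is the additional image prior on the intermediates, which is not shown here.

```python
import numpy as np

def barycenter_1d(samples_a, samples_b, t, n_quantiles=256):
    """Wasserstein barycenter of two 1-D distributions at weight t,
    via the closed form: interpolate the inverse CDFs."""
    q = np.linspace(0.0, 1.0, n_quantiles)
    qa = np.quantile(samples_a, q)
    qb = np.quantile(samples_b, q)
    return (1.0 - t) * qa + t * qb   # quantiles of the barycenter
```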
Future Video Synthesis With Object Motion Prediction | Yue Wu, Rongrong Gao, Jaesik Park, Qifeng Chen | We present an approach to predict future video frames given a sequence of continuous video frames in the past. Instead of synthesizing images directly, our approach is designed to understand the complex scene dynamics by decoupling the background scene and moving objects. The appearance of the scene components in the future is predicted by non-rigid deformation of the background and affine transformation of moving objects. The anticipated appearances are combined to create a reasonable video in the future. With this procedure, our method exhibits far fewer tearing or distortion artifacts than other approaches. Experimental results on the Cityscapes and KITTI datasets show that our model outperforms the state-of-the-art in terms of visual quality and accuracy. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Wu_Future_Video_Synthesis_With_Object_Motion_Prediction_CVPR_2020_paper.pdf | http://arxiv.org/abs/2004.00542 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Wu_Future_Video_Synthesis_With_Object_Motion_Prediction_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Wu_Future_Video_Synthesis_With_Object_Motion_Prediction_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
Reference-Based Sketch Image Colorization Using Augmented-Self Reference and Dense Semantic Correspondence | Junsoo Lee, Eungyeup Kim, Yunsung Lee, Dongjun Kim, Jaehyuk Chang, Jaegul Choo | This paper tackles the automatic colorization task of a sketch image given an already-colored reference image. Colorizing a sketch image is in high demand in comics, animation, and other content creation applications, but it suffers from the information scarcity of sketch images. To address this, a reference image can guide the colorization process in a reliable and user-driven manner. However, it is difficult to prepare a training dataset that has a sufficient amount of semantically meaningful pairs of images as well as the ground truth for a colored image reflecting a given reference (e.g., coloring a sketch of an originally blue car given a reference green car). To tackle this challenge, we propose to utilize the identical image with geometric distortion as a virtual reference, which makes it possible to secure the ground truth for a colored output image. Furthermore, it naturally provides the ground truth for dense semantic correspondence, which we utilize in our internal attention mechanism for color transfer from reference to sketch input. We demonstrate the effectiveness of our approach in various types of sketch image colorization via quantitative as well as qualitative evaluation against existing methods. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Lee_Reference-Based_Sketch_Image_Colorization_Using_Augmented-Self_Reference_and_Dense_Semantic_CVPR_2020_paper.pdf | http://arxiv.org/abs/2005.05207 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Lee_Reference-Based_Sketch_Image_Colorization_Using_Augmented-Self_Reference_and_Dense_Semantic_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Lee_Reference-Based_Sketch_Image_Colorization_Using_Augmented-Self_Reference_and_Dense_Semantic_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Lee_Reference-Based_Sketch_Image_CVPR_2020_supplemental.pdf | null | null |
Collaborative Motion Prediction via Neural Motion Message Passing | Yue Hu, Siheng Chen, Ya Zhang, Xiao Gu | Motion prediction is essential and challenging for autonomous vehicles and social robots. One challenge of motion prediction is to model the interaction among traffic actors, which could cooperate with each other to avoid collisions or form groups. To address this challenge, we propose neural motion message passing (NMMP) to explicitly model the interaction and learn representations for directed interactions between actors. Based on the proposed NMMP, we design the motion prediction systems for two settings: the pedestrian setting and the joint pedestrian and vehicle setting. Both systems share a common pattern: we use an individual branch to model the behavior of a single actor and an interactive branch to model the interaction between actors, but with different wrappers to handle the varied input formats and characteristics. The experimental results show that both systems outperform the previous state-of-the-art methods on several existing benchmarks. Besides, we provide interpretability for interaction learning. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Hu_Collaborative_Motion_Prediction_via_Neural_Motion_Message_Passing_CVPR_2020_paper.pdf | http://arxiv.org/abs/2003.06594 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Hu_Collaborative_Motion_Prediction_via_Neural_Motion_Message_Passing_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Hu_Collaborative_Motion_Prediction_via_Neural_Motion_Message_Passing_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Hu_Collaborative_Motion_Prediction_CVPR_2020_supplemental.pdf | null | null |
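One round of message passing over a directed actor-interaction graph, the generic operation NMMP builds on, can be sketched as below. The layer sizes, MLP shapes, and sum aggregation are assumptions for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class MessagePassing(nn.Module):
    """One round of message passing over a directed actor graph."""
    def __init__(self, d):
        super().__init__()
        self.edge_mlp = nn.Sequential(nn.Linear(2 * d, d), nn.ReLU())
        self.node_mlp = nn.Sequential(nn.Linear(2 * d, d), nn.ReLU())

    def forward(self, h, edges):
        # h: (N, d) actor embeddings; edges: (E, 2) long tensor of (src, dst).
        src, dst = edges[:, 0], edges[:, 1]
        msg = self.edge_mlp(torch.cat([h[src], h[dst]], dim=-1))
        agg = torch.zeros_like(h).index_add_(0, dst, msg)  # sum incoming messages
        return self.node_mlp(torch.cat([h, agg], dim=-1))
```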
End-to-End Learnable Geometric Vision by Backpropagating PnP Optimization | Bo Chen, Alvaro Parra, Jiewei Cao, Nan Li, Tat-Jun Chin | Deep networks excel in learning patterns from large amounts of data. On the other hand, many geometric vision tasks are specified as optimization problems. To seamlessly combine deep learning and geometric vision, it is vital to perform learning and geometric optimization end-to-end. Towards this aim, we present BPnP, a novel network module that backpropagates gradients through a Perspective-n-Points (PnP) solver to guide parameter updates of a neural network. Based on implicit differentiation, we show that the gradients of a "self-contained" PnP solver can be derived accurately and efficiently, as if the optimizer block were a differentiable function. We validate BPnP by incorporating it in a deep model that can learn camera intrinsics, camera extrinsics (poses) and 3D structure from training datasets. Further, we develop an end-to-end trainable pipeline for object pose estimation, which achieves greater accuracy by combining feature-based heatmap losses with 2D-3D reprojection errors. Since our approach can be extended to other optimization problems, our work helps to pave the way to perform learnable geometric vision in a principled manner. Our PyTorch implementation of BPnP is available on http://github.com/BoChenYS/BPnP. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Chen_End-to-End_Learnable_Geometric_Vision_by_Backpropagating_PnP_Optimization_CVPR_2020_paper.pdf | http://arxiv.org/abs/1909.06043 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Chen_End-to-End_Learnable_Geometric_Vision_by_Backpropagating_PnP_Optimization_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Chen_End-to-End_Learnable_Geometric_Vision_by_Backpropagating_PnP_Optimization_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Chen_End-to-End_Learnable_Geometric_CVPR_2020_supplemental.zip | null | null |
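The implicit-differentiation principle that BPnP applies to a PnP solver can be demonstrated on a toy problem: if the solver output y* satisfies a stationarity condition g(y*, x) = 0, then dy*/dx = -(dg/dy)^{-1}(dg/dx). The sketch below is a generic 1-D demonstration of that principle, not the BPnP implementation; the residual and example values are illustrative.

```python
import torch

def implicit_grad(residual, y_star, x):
    """d y*/d x for a solver output y* with residual(y*, x) = 0,
    via the implicit function theorem."""
    jac_y = torch.autograd.functional.jacobian(lambda y: residual(y, x), y_star)
    jac_x = torch.autograd.functional.jacobian(lambda z: residual(y_star, z), x)
    return -torch.linalg.solve(jac_y, jac_x)

# Example: y*(x) minimizes (y - 3x)^2, so the residual is dE/dy = 2(y - 3x).
g = lambda y, x: 2.0 * (y - 3.0 * x)
x = torch.tensor([1.0])
y_star = torch.tensor([3.0])          # the minimizer for x = 1
print(implicit_grad(g, y_star, x))    # tensor([[3.]]), matching y* = 3x
```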
Learning Texture Transformer Network for Image Super-Resolution | Fuzhi Yang, Huan Yang, Jianlong Fu, Hongtao Lu, Baining Guo | We study image super-resolution (SR), which aims to recover realistic textures from a low-resolution (LR) image. Recent progress has been made by taking high-resolution images as references (Ref), so that relevant textures can be transferred to LR images. However, existing SR approaches neglect to use attention mechanisms to transfer high-resolution (HR) textures from Ref images, which limits these approaches in challenging cases. In this paper, we propose a novel Texture Transformer Network for Image Super-Resolution (TTSR), in which the LR and Ref images are formulated as queries and keys in a transformer, respectively. TTSR consists of four closely-related modules optimized for image generation tasks, including a learnable texture extractor by DNN, a relevance embedding module, a hard-attention module for texture transfer, and a soft-attention module for texture synthesis. Such a design encourages joint feature learning across LR and Ref images, in which deep feature correspondences can be discovered by attention, and thus accurate texture features can be transferred. The proposed texture transformer can be further stacked in a cross-scale way, which enables texture recovery from different levels (e.g., from 1x to 4x magnification). Extensive experiments show that TTSR achieves significant improvements over state-of-the-art approaches on both quantitative and qualitative evaluations. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Yang_Learning_Texture_Transformer_Network_for_Image_Super-Resolution_CVPR_2020_paper.pdf | http://arxiv.org/abs/2006.04139 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Yang_Learning_Texture_Transformer_Network_for_Image_Super-Resolution_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Yang_Learning_Texture_Transformer_Network_for_Image_Super-Resolution_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Yang_Learning_Texture_Transformer_CVPR_2020_supplemental.pdf | null | null |
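The query/key/value reading of texture transfer can be sketched in a few lines: for each LR query feature, hard attention copies the HR texture of the most relevant Ref key, and the relevance score doubles as a soft-attention confidence. This is a minimal sketch assuming cosine-similarity relevance over flattened patch features; TTSR's learnable extractor and fusion modules are omitted.

```python
import torch
import torch.nn.functional as F

def texture_transfer(q, k, v):
    """q: (Nq, d) LR-image features, k: (Nk, d) Ref features, v: (Nk, d)
    HR Ref texture features. Returns transferred textures + confidence."""
    rel = F.normalize(q, dim=1) @ F.normalize(k, dim=1).t()  # relevance embedding
    idx = rel.argmax(dim=1)            # hard attention: best Ref patch per query
    conf = rel.max(dim=1).values       # soft attention: how much to trust it
    return v[idx], conf
```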
Distribution-Induced Bidirectional Generative Adversarial Network for Graph Representation Learning | Shuai Zheng, Zhenfeng Zhu, Xingxing Zhang, Zhizhe Liu, Jian Cheng, Yao Zhao | Graph representation learning aims to encode all nodes of a graph into low-dimensional vectors that will serve as input to many computer vision tasks. However, most existing algorithms ignore the existence of the inherent data distribution and even noise. This may significantly increase the phenomenon of over-fitting and deteriorate the testing accuracy. In this paper, we propose a Distribution-induced Bidirectional Generative Adversarial Network (named DBGAN) for graph representation learning. Instead of the widely used Gaussian assumption, the prior distribution of the latent representation in our DBGAN is estimated in a structure-aware way, which implicitly bridges the graph and content spaces by prototype learning. Thus discriminative and robust representations are generated for all nodes. Furthermore, to improve their generalization ability while preserving representation ability, the sample-level and distribution-level consistency are well balanced via a bidirectional adversarial learning framework. An extensive group of experiments is then carefully designed and presented, demonstrating that our DBGAN obtains a remarkably more favorable trade-off between representation and robustness, and is meanwhile dimension-efficient, compared with currently available alternatives in various tasks. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Zheng_Distribution-Induced_Bidirectional_Generative_Adversarial_Network_for_Graph_Representation_Learning_CVPR_2020_paper.pdf | http://arxiv.org/abs/1912.01899 | https://www.youtube.com/watch?v=u1AD5BulZrs | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Zheng_Distribution-Induced_Bidirectional_Generative_Adversarial_Network_for_Graph_Representation_Learning_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Zheng_Distribution-Induced_Bidirectional_Generative_Adversarial_Network_for_Graph_Representation_Learning_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
Benchmarking the Robustness of Semantic Segmentation Models | Christoph Kamann, Carsten Rother | When designing a semantic segmentation module for a practical application, such as autonomous driving, it is crucial to understand the robustness of the module with respect to a wide range of image corruptions. While there are recent robustness studies for full-image classification, we are the first to present an exhaustive study for semantic segmentation, based on the state-of-the-art model DeepLabv3+. To increase the realism of our study, we utilize almost 400,000 images generated from Cityscapes, PASCAL VOC 2012, and ADE20K. Based on the benchmark study, we gain several new insights. Firstly, contrary to full-image classification, model robustness increases with model performance, in most cases. Secondly, some architecture properties affect robustness significantly, such as a Dense Prediction Cell, which was designed to maximize performance on clean data only. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Kamann_Benchmarking_the_Robustness_of_Semantic_Segmentation_Models_CVPR_2020_paper.pdf | http://arxiv.org/abs/1908.05005 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Kamann_Benchmarking_the_Robustness_of_Semantic_Segmentation_Models_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Kamann_Benchmarking_the_Robustness_of_Semantic_Segmentation_Models_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Kamann_Benchmarking_the_Robustness_CVPR_2020_supplemental.pdf | null | null |
Local Implicit Grid Representations for 3D Scenes | Chiyu "Max" Jiang, Avneesh Sud, Ameesh Makadia, Jingwei Huang, Matthias Niessner, Thomas Funkhouser | Shape priors learned from data are commonly used to reconstruct 3D objects from partial or noisy data. Yet no such shape priors are available for indoor scenes, since typical 3D autoencoders cannot handle their scale, complexity, or diversity. In this paper, we introduce Local Implicit Grid Representations, a new 3D shape representation designed for scalability and generality. The motivating idea is that most 3D surfaces share geometric details at some scale -- i.e., at a scale smaller than an entire object and larger than a small patch. We train an autoencoder to learn an embedding of local crops of 3D shapes at that size. Then, we use the decoder as a component in a shape optimization that solves for a set of latent codes on a regular grid of overlapping crops such that an interpolation of the decoded local shapes matches a partial or noisy observation. We demonstrate the value of this proposed approach for 3D surface reconstruction from sparse point observations, showing significantly better results than alternative approaches. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Jiang_Local_Implicit_Grid_Representations_for_3D_Scenes_CVPR_2020_paper.pdf | http://arxiv.org/abs/2003.08981 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Jiang_Local_Implicit_Grid_Representations_for_3D_Scenes_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Jiang_Local_Implicit_Grid_Representations_for_3D_Scenes_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Jiang_Local_Implicit_Grid_CVPR_2020_supplemental.zip | null | null |
Deformable Siamese Attention Networks for Visual Object Tracking | Yuechen Yu, Yilei Xiong, Weilin Huang, Matthew R. Scott | Siamese-based trackers have achieved excellent performance on visual object tracking. However, the target template is not updated online, and the features of the target template and search image are computed independently in a Siamese architecture. In this paper, we propose Deformable Siamese Attention Networks, referred to as SiamAttn, by introducing a new Siamese attention mechanism that computes deformable self-attention and cross-attention. The self-attention learns strong context information via spatial attention, and selectively emphasizes interdependent channel-wise features with channel attention. The cross-attention is capable of aggregating rich contextual interdependencies between the target template and the search image, providing an implicit manner to adaptively update the target template. In addition, we design a region refinement module that computes depth-wise cross correlations between the attentional features for more accurate tracking. We conduct experiments on six benchmarks, where our method achieves new state-of-the-art results, outperforming the recent strong baseline SiamRPN++ and improving EAO from 0.464 to 0.537 on VOT2016 and from 0.415 to 0.470 on VOT2018. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Yu_Deformable_Siamese_Attention_Networks_for_Visual_Object_Tracking_CVPR_2020_paper.pdf | http://arxiv.org/abs/2004.06711 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Yu_Deformable_Siamese_Attention_Networks_for_Visual_Object_Tracking_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Yu_Deformable_Siamese_Attention_Networks_for_Visual_Object_Tracking_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Yu_Deformable_Siamese_Attention_CVPR_2020_supplemental.pdf | null | null |
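Depth-wise cross-correlation, the operation the region refinement module computes between attentional features, is the standard channel-by-channel correlation used in Siamese trackers and can be written with a grouped convolution. A minimal sketch:

```python
import torch
import torch.nn.functional as F

def depthwise_xcorr(search, template):
    """Correlate each channel of the template with the same channel of
    the search features.
    search: (B, C, H, W), template: (B, C, h, w) -> (B, C, H-h+1, W-w+1)."""
    b, c, h, w = search.shape
    x = search.reshape(1, b * c, h, w)
    kernel = template.reshape(b * c, 1, template.shape[2], template.shape[3])
    out = F.conv2d(x, kernel, groups=b * c)   # one group per (batch, channel)
    return out.reshape(b, c, out.shape[2], out.shape[3])
```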
Learning Video Object Segmentation From Unlabeled Videos | Xiankai Lu, Wenguan Wang, Jianbing Shen, Yu-Wing Tai, David J. Crandall, Steven C. H. Hoi | We propose a new method for video object segmentation (VOS) that addresses object pattern learning from unlabeled videos, unlike most existing methods which rely heavily on extensive annotated data. We introduce a unified unsupervised/weakly supervised learning framework, called MuG, that comprehensively captures intrinsic properties of VOS at multiple granularities. Our approach can help advance understanding of visual patterns in VOS and significantly reduce annotation burden. With a carefully-designed architecture and strong representation learning ability, our learned model can be applied to diverse VOS settings, including object-level zero-shot VOS, instance-level zero-shot VOS, and one-shot VOS. Experiments demonstrate promising performance in these settings, as well as the potential of MuG in leveraging unlabeled data to further improve the segmentation accuracy. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Lu_Learning_Video_Object_Segmentation_From_Unlabeled_Videos_CVPR_2020_paper.pdf | http://arxiv.org/abs/2003.05020 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Lu_Learning_Video_Object_Segmentation_From_Unlabeled_Videos_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Lu_Learning_Video_Object_Segmentation_From_Unlabeled_Videos_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
ScopeFlow: Dynamic Scene Scoping for Optical Flow | Aviram Bar-Haim, Lior Wolf | We propose to modify the common training protocols of optical flow, leading to sizable accuracy improvements without adding to the computational complexity of the training process. The improvement is based on observing the bias in sampling challenging data that exists in the current training protocol, and improving the sampling process. In addition, we find that both regularization and augmentation should decrease during the training protocol. Using an existing low-parameter architecture, the method is ranked first on the MPI Sintel benchmark among all other methods, improving the best two-frame method accuracy by more than 10%. The method also surpasses all similar architecture variants by more than 12% and 19.7% on the KITTI benchmarks, achieving the lowest Average End-Point Error on KITTI2012 among two-frame methods, without using extra datasets. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Bar-Haim_ScopeFlow_Dynamic_Scene_Scoping_for_Optical_Flow_CVPR_2020_paper.pdf | null | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Bar-Haim_ScopeFlow_Dynamic_Scene_Scoping_for_Optical_Flow_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Bar-Haim_ScopeFlow_Dynamic_Scene_Scoping_for_Optical_Flow_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Bar-Haim_ScopeFlow_Dynamic_Scene_CVPR_2020_supplemental.zip | null | null |
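The finding that regularization and augmentation should decrease during training can be implemented with a simple annealing schedule. The linear shape and the example values below are assumptions for illustration, not the paper's actual schedule.

```python
def decayed(start, end, step, total_steps):
    """Linearly anneal an augmentation/regularization magnitude."""
    frac = min(step / float(total_steps), 1.0)
    return start + (end - start) * frac

# e.g. photometric-jitter strength decaying from 0.5 to 0.1 over training:
# strength = decayed(0.5, 0.1, step, total_steps=100_000)
```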
Context-Aware Human Motion Prediction | Enric Corona, Albert Pumarola, Guillem Alenya, Francesc Moreno-Noguer | The problem of predicting human motion given a sequence of past observations is at the core of many applications in robotics and computer vision. The current state of the art formulates this problem as a sequence-to-sequence task, in which a history of 3D skeletons feeds a Recurrent Neural Network (RNN) that predicts future movements, typically in the order of 1 to 2 seconds. However, one aspect that has been overlooked so far is the fact that human motion is inherently driven by interactions with objects and/or other humans in the environment. In this paper, we explore this scenario using a novel context-aware motion prediction architecture. We use a semantic-graph model where the nodes parameterize the human and objects in the scene and the edges their mutual interactions. These interactions are iteratively learned through a graph attention layer, fed with the past observations, which now include both object and human body motions. Once this semantic graph is learned, we inject it into a standard RNN to predict future movements of the human/s and object/s. We consider two variants of our architecture, either freezing the contextual interactions in the future or updating them. A thorough evaluation on the Whole-Body Human Motion Database shows that in both cases, our context-aware networks clearly outperform baselines in which the context information is not considered. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Corona_Context-Aware_Human_Motion_Prediction_CVPR_2020_paper.pdf | http://arxiv.org/abs/1904.03419 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Corona_Context-Aware_Human_Motion_Prediction_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Corona_Context-Aware_Human_Motion_Prediction_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
MISC: Multi-Condition Injection and Spatially-Adaptive Compositing for Conditional Person Image Synthesis | Shuchen Weng, Wenbo Li, Dawei Li, Hongxia Jin, Boxin Shi | In this paper, we explore synthesizing person images with multiple conditions for various backgrounds. To this end, we propose a framework named "MISC" for conditional image generation and image compositing. For conditional image generation, we improve the existing condition injection mechanisms by leveraging the inter-condition correlations. For image compositing, we theoretically prove the weaknesses of cutting-edge methods and make it more robust by removing the spatial-invariance constraint and enabling the bounding mechanism and spatial adaptability. We show the effectiveness of our method on the Video Instance-level Parsing dataset, and demonstrate the robustness through controllability tests. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Weng_MISC_Multi-Condition_Injection_and_Spatially-Adaptive_Compositing_for_Conditional_Person_Image_CVPR_2020_paper.pdf | null | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Weng_MISC_Multi-Condition_Injection_and_Spatially-Adaptive_Compositing_for_Conditional_Person_Image_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Weng_MISC_Multi-Condition_Injection_and_Spatially-Adaptive_Compositing_for_Conditional_Person_Image_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
Pathological Retinal Region Segmentation From OCT Images Using Geometric Relation Based Augmentation | Dwarikanath Mahapatra, Behzad Bozorgtabar, Ling Shao | Medical image segmentation is important for computer-aided diagnosis. Pixelwise manual annotations of large datasets require high expertise and are time consuming. Conventional data augmentations have limited benefit by not fully representing the underlying distribution of the training set, thus affecting model robustness when tested on images captured from different sources. Prior work leverages synthetic images for data augmentation, ignoring the interleaved geometric relationship between different anatomical labels. We propose improvements over previous GAN-based medical image synthesis methods by jointly encoding the intrinsic relationship of geometry and shape. Latent space variable sampling results in diverse generated images from a base image and improves robustness. Augmented datasets using our method for automatic segmentation of retinal optical coherence tomography (OCT) images outperform existing methods on the public RETOUCH dataset, which contains images captured with different acquisition procedures. Ablation studies and visual analysis also demonstrate the benefits of integrating geometry and diversity. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Mahapatra_Pathological_Retinal_Region_Segmentation_From_OCT_Images_Using_Geometric_Relation_CVPR_2020_paper.pdf | http://arxiv.org/abs/2003.14119 | https://www.youtube.com/watch?v=XhSf4LVMlpc | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Mahapatra_Pathological_Retinal_Region_Segmentation_From_OCT_Images_Using_Geometric_Relation_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Mahapatra_Pathological_Retinal_Region_Segmentation_From_OCT_Images_Using_Geometric_Relation_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Mahapatra_Pathological_Retinal_Region_CVPR_2020_supplemental.pdf | null | null |
Filter Grafting for Deep Neural Networks | Fanxu Meng, Hao Cheng, Ke Li, Zhixin Xu, Rongrong Ji, Xing Sun, Guangming Lu | This paper proposes a new learning paradigm called filter grafting, which aims to improve the representation capability of Deep Neural Networks (DNNs). The motivation is that DNNs have unimportant (invalid) filters (e.g., l1 norm close to 0). These filters limit the potential of DNNs since they are identified as having little effect on the network. While filter pruning removes these invalid filters for efficiency consideration, filter grafting re-activates them from an accuracy-boosting perspective. The activation is processed by grafting external information (weights) into invalid filters. To better perform the grafting process, we develop an entropy-based criterion to measure the information of filters and an adaptive weighting strategy for balancing the grafted information among networks. After the grafting operation, the network has very few invalid filters compared with its untouched state, empowering the model with more representation capacity. We also perform extensive experiments on classification and recognition tasks to show the superiority of our method. For example, the grafted MobileNetV2 outperforms the non-grafted MobileNetV2 by about 7 percent on the CIFAR-100 dataset. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Meng_Filter_Grafting_for_Deep_Neural_Networks_CVPR_2020_paper.pdf | http://arxiv.org/abs/2001.05868 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Meng_Filter_Grafting_for_Deep_Neural_Networks_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Meng_Filter_Grafting_for_Deep_Neural_Networks_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Meng_Filter_Grafting_for_CVPR_2020_supplemental.pdf | null | null |
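The entropy criterion and the adaptive weighting can be sketched as below: score each filter by the entropy of its weight histogram, then blend in an external filter in proportion to relative entropy. This is a simplified sketch; the bin count and the entropy-ratio weighting are assumptions, and the paper's exact weighting function may differ.

```python
import torch

def filter_entropy(w, bins=16):
    """Entropy of a filter's weight histogram: low entropy suggests an
    uninformative ('invalid') filter."""
    hist = torch.histc(w.flatten(), bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -(p * p.log()).sum()

def graft(w_self, w_other, bins=16):
    """Blend another network's filter into ours, trusting whichever
    filter carries more information."""
    h_self, h_other = filter_entropy(w_self, bins), filter_entropy(w_other, bins)
    alpha = h_self / (h_self + h_other + 1e-8)
    return alpha * w_self + (1.0 - alpha) * w_other
```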
Intuitive, Interactive Beard and Hair Synthesis With Generative Models | Kyle Olszewski, Duygu Ceylan, Jun Xing, Jose Echevarria, Zhili Chen, Weikai Chen, Hao Li | We present an interactive approach to synthesizing realistic variations in facial hair in images, ranging from subtle edits to existing hair to the addition of complex and challenging hair in images of clean-shaven subjects. To circumvent the tedious and computationally expensive tasks of modeling, rendering and compositing the 3D geometry of the target hairstyle using the traditional graphics pipeline, we employ a neural network pipeline that synthesizes realistic and detailed images of facial hair directly in the target image in under one second. The synthesis is controlled by simple and sparse guide strokes from the user defining the general structural and color properties of the target hairstyle. We qualitatively and quantitatively evaluate our chosen method compared to several alternative approaches. We show compelling interactive editing results with a prototype user interface that allows novice users to progressively refine the generated image to match their desired hairstyle, and demonstrate that our approach also allows for flexible and high-fidelity scalp hair synthesis. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Olszewski_Intuitive_Interactive_Beard_and_Hair_Synthesis_With_Generative_Models_CVPR_2020_paper.pdf | http://arxiv.org/abs/2004.06848 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Olszewski_Intuitive_Interactive_Beard_and_Hair_Synthesis_With_Generative_Models_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Olszewski_Intuitive_Interactive_Beard_and_Hair_Synthesis_With_Generative_Models_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Olszewski_Intuitive_Interactive_Beard_CVPR_2020_supplemental.zip | null | null |
Global Optimality for Point Set Registration Using Semidefinite Programming | Jose Pedro Iglesias, Carl Olsson, Fredrik Kahl | In this paper we present a study of global optimality conditions for Point Set Registration (PSR) with missing data. PSR is the problem of aligning multiple point clouds with an unknown target point cloud. Since non-linear rotation constraints are present, the problem is inherently non-convex and typically relaxed by computing the Lagrange dual, which is a Semidefinite Program (SDP). In this work we show that given a local minimizer, the dual variables of the SDP can be computed in closed form. This opens up the possibility of verifying optimality using the SDP formulation without explicitly solving it. In addition, it allows us to study under what conditions the relaxation is tight, through spectral analysis. We show that if the errors in the (unknown) optimal solution are bounded, the SDP formulation will be able to recover it. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Iglesias_Global_Optimality_for_Point_Set_Registration_Using_Semidefinite_Programming_CVPR_2020_paper.pdf | null | https://www.youtube.com/watch?v=xdQnHQ1Eg2U | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Iglesias_Global_Optimality_for_Point_Set_Registration_Using_Semidefinite_Programming_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Iglesias_Global_Optimality_for_Point_Set_Registration_Using_Semidefinite_Programming_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
SQE: a Self Quality Evaluation Metric for Parameters Optimization in Multi-Object Tracking | Yanru Huang, Feiyu Zhu, Zheni Zeng, Xi Qiu, Yuan Shen, Jianan Wu | We present a novel self quality evaluation metric SQE for parameters optimization in the challenging yet critical multi-object tracking task. Current evaluation metrics all require annotated ground truth and thus will fail in the test environment and realistic circumstances, prohibiting further optimization after training. By contrast, our metric reflects the internal characteristics of trajectory hypotheses and measures tracking performance without ground truth. We demonstrate that trajectories with different qualities exhibit different single or multiple peaks over the feature distance distribution, inspiring us to design a simple yet effective method to assess the quality of trajectories using a two-class Gaussian mixture model. Experiments mainly on the MOT16 Challenge data sets verify the effectiveness of our method in both correlating with existing metrics and enabling parameters self-optimization to achieve better performance. We believe that our conclusions and method are inspiring for future multi-object tracking in practice. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Huang_SQE_a_Self_Quality_Evaluation_Metric_for_Parameters_Optimization_in_CVPR_2020_paper.pdf | http://arxiv.org/abs/2004.07472 | https://www.youtube.com/watch?v=7JTCqv6vKC0 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Huang_SQE_a_Self_Quality_Evaluation_Metric_for_Parameters_Optimization_in_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Huang_SQE_a_Self_Quality_Evaluation_Metric_for_Parameters_Optimization_in_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Huang_SQE_a_Self_CVPR_2020_supplemental.pdf | null | null |
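The two-class Gaussian mixture idea can be sketched directly: fit a two-component GMM to a trajectory's internal feature distances and use the separation between the two components as a purity signal. The separation score below is an illustrative choice, not the paper's exact statistic.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def separation_score(feature_dists):
    """Fit a two-class GMM to intra-trajectory feature distances; a
    large gap between the two means suggests the trajectory mixes
    identities (two peaks), i.e. low quality."""
    d = np.asarray(feature_dists, dtype=float).reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(d)
    m_lo, m_hi = np.sort(gmm.means_.ravel())
    spread = np.sqrt(gmm.covariances_).ravel().mean()
    return (m_hi - m_lo) / (spread + 1e-8)
```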
PointASNL: Robust Point Clouds Processing Using Nonlocal Neural Networks With Adaptive Sampling | Xu Yan, Chaoda Zheng, Zhen Li, Sheng Wang, Shuguang Cui | Raw point cloud data inevitably contain outliers or noise introduced by 3D sensors or reconstruction algorithms. In this paper, we present a novel end-to-end network for robust point cloud processing, named PointASNL, which can deal with point clouds with noise effectively. The key component in our approach is the adaptive sampling (AS) module. It first re-weights the neighbors around the initial sampled points from farthest point sampling (FPS), and then adaptively adjusts the sampled points beyond the entire point cloud. Our AS module can not only benefit the feature learning of point clouds, but also ease the biased effect of outliers. To further capture the neighbor and long-range dependencies of the sampled point, we propose a local-nonlocal (L-NL) module inspired by the nonlocal operation. This L-NL module makes the learning process insensitive to noise. Extensive experiments verify the robustness and superiority of our approach in point cloud processing tasks on synthetic data, indoor data, and outdoor data, with or without noise. Specifically, PointASNL achieves state-of-the-art robust performance for classification and segmentation tasks on all datasets, and significantly outperforms previous methods on the real-world outdoor SemanticKITTI dataset with considerable noise. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Yan_PointASNL_Robust_Point_Clouds_Processing_Using_Nonlocal_Neural_Networks_With_CVPR_2020_paper.pdf | http://arxiv.org/abs/2003.00492 | https://www.youtube.com/watch?v=8YYAXsLlWrY | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Yan_PointASNL_Robust_Point_Clouds_Processing_Using_Nonlocal_Neural_Networks_With_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Yan_PointASNL_Robust_Point_Clouds_Processing_Using_Nonlocal_Neural_Networks_With_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Yan_PointASNL_Robust_Point_CVPR_2020_supplemental.pdf | null | null |
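Farthest point sampling, the initialization that the adaptive sampling module then re-weights and shifts, is standard and short enough to show in full. The learned re-weighting itself is not sketched here.

```python
import numpy as np

def farthest_point_sampling(pts, k, seed=0):
    """Classic FPS over an (N, 3) point cloud: greedily pick the point
    farthest from everything selected so far. Returns k indices."""
    rng = np.random.default_rng(seed)
    sel = [int(rng.integers(len(pts)))]
    dist = np.linalg.norm(pts - pts[sel[0]], axis=1)
    for _ in range(k - 1):
        sel.append(int(np.argmax(dist)))
        dist = np.minimum(dist, np.linalg.norm(pts - pts[sel[-1]], axis=1))
    return np.asarray(sel)
```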
Minimizing Discrete Total Curvature for Image Processing | Qiuxiang Zhong, Yutong Li, Yijie Yang, Yuping Duan | Curvature regularities have received growing attention for providing strong priors on the continuity of edges in image processing applications. However, owing to the non-convex and non-smooth properties of the high-order regularizer, the numerical solution becomes challenging in real-time tasks. In this paper, we propose a novel curvature regularity, the total curvature (TC), by minimizing the normal curvatures along different directions. We estimate the normal curvatures discretely in the local neighborhood according to differential geometry theory. The resulting curvature regularity can be regarded as a re-weighted total variation (TV) minimization problem, which can be efficiently solved by an alternating direction method of multipliers (ADMM) based algorithm. By comparing with TV and Euler's elastica energy, we demonstrate the effectiveness and superiority of the total curvature regularity for various image processing applications. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Zhong_Minimizing_Discrete_Total_Curvature_for_Image_Processing_CVPR_2020_paper.pdf | null | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Zhong_Minimizing_Discrete_Total_Curvature_for_Image_Processing_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Zhong_Minimizing_Discrete_Total_Curvature_for_Image_Processing_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
Unsupervised Learning of Intrinsic Structural Representation Points | Nenglun Chen, Lingjie Liu, Zhiming Cui, Runnan Chen, Duygu Ceylan, Changhe Tu, Wenping Wang | Learning structures of 3D shapes is a fundamental problem in the field of computer graphics and geometry processing. We present a simple yet interpretable unsupervised method for learning a new structural representation in the form of 3D structure points. The 3D structure points produced by our method encode the shape structure intrinsically and exhibit semantic consistency across all the shape instances with similar structures. This is a challenging goal that has not fully been achieved by other methods. Specifically, our method takes a 3D point cloud as input and encodes it as a set of local features. The local features are then passed through a novel point integration module to produce a set of 3D structure points. The chamfer distance is used as reconstruction loss to ensure the structure points lie close to the input point cloud. Extensive experiments have shown that our method outperforms the state-of-the-art on the semantic shape correspondence task and achieves comparable performance with the state-of-the-art on the segmentation label transfer task. Moreover, the PCA based shape embedding built upon consistent structure points demonstrates good performance in preserving the shape structures. Code is available at https://github.com/NolenChen/3DStructurePoints | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Chen_Unsupervised_Learning_of_Intrinsic_Structural_Representation_Points_CVPR_2020_paper.pdf | http://arxiv.org/abs/2003.01661 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Chen_Unsupervised_Learning_of_Intrinsic_Structural_Representation_Points_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Chen_Unsupervised_Learning_of_Intrinsic_Structural_Representation_Points_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Chen_Unsupervised_Learning_of_CVPR_2020_supplemental.pdf | null | null |
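The Chamfer distance used as the reconstruction loss above (keeping structure points close to the input cloud) has a compact standard form, shown below as a minimal PyTorch sketch.

```python
import torch

def chamfer_distance(p, q):
    """Symmetric Chamfer distance between point sets p: (N, 3) and
    q: (M, 3): average nearest-neighbor distance in both directions."""
    d = torch.cdist(p, q)                      # (N, M) pairwise distances
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()
```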
Deep Active Learning for Biased Datasets via Fisher Kernel Self-Supervision | Denis Gudovskiy, Alec Hodgkinson, Takuya Yamaguchi, Sotaro Tsukizawa | Active learning (AL) aims to minimize labeling efforts for data-demanding deep neural networks (DNNs) by selecting the most representative data points for annotation. However, currently used methods are ill-equipped to deal with biased data. The main motivation of this paper is to consider a realistic setting for pool-based semi-supervised AL, where the unlabeled collection of train data is biased. We theoretically derive an optimal acquisition function for AL in this setting. It can be formulated as distribution shift minimization between unlabeled train data and a weakly-labeled validation dataset. To implement such an acquisition function, we propose a low-complexity method for feature density matching using a self-supervised Fisher kernel (FK) as well as several novel pseudo-label estimators. Our FK-based method outperforms state-of-the-art methods on MNIST, SVHN, and ImageNet classification while requiring only 1/10th of the processing. The conducted experiments show at least a 40% drop in labeling efforts for the biased class-imbalanced data compared to existing methods. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Gudovskiy_Deep_Active_Learning_for_Biased_Datasets_via_Fisher_Kernel_Self-Supervision_CVPR_2020_paper.pdf | http://arxiv.org/abs/2003.00393 | https://www.youtube.com/watch?v=Y3hKxSaEjHc | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Gudovskiy_Deep_Active_Learning_for_Biased_Datasets_via_Fisher_Kernel_Self-Supervision_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Gudovskiy_Deep_Active_Learning_for_Biased_Datasets_via_Fisher_Kernel_Self-Supervision_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Gudovskiy_Deep_Active_Learning_CVPR_2020_supplemental.pdf | null | null |
FGN: Fully Guided Network for Few-Shot Instance Segmentation | Zhibo Fan, Jin-Gang Yu, Zhihao Liang, Jiarong Ou, Changxin Gao, Gui-Song Xia, Yuanqing Li | Few-shot instance segmentation (FSIS) conjoins the few-shot learning paradigm with general instance segmentation, which provides a possible way of tackling instance segmentation in the lack of abundant labeled data for training. This paper presents a Fully Guided Network (FGN) for few-shot instance segmentation. FGN perceives FSIS as a guided model where a so-called support set is encoded and utilized to guide the predictions of a base instance segmentation network (i.e., Mask R-CNN), critical to which is the guidance mechanism. In this view, FGN introduces different guidance mechanisms into the various key components in Mask R-CNN, including Attention-Guided RPN, Relation-Guided Detector, and Attention-Guided FCN, in order to make full use of the guidance effect from the support set and adapt better to the inter-class generalization. Experiments on public datasets demonstrate that our proposed FGN can outperform the state-of-the-art methods. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Fan_FGN_Fully_Guided_Network_for_Few-Shot_Instance_Segmentation_CVPR_2020_paper.pdf | http://arxiv.org/abs/2003.13954 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Fan_FGN_Fully_Guided_Network_for_Few-Shot_Instance_Segmentation_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Fan_FGN_Fully_Guided_Network_for_Few-Shot_Instance_Segmentation_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
DualConvMesh-Net: Joint Geodesic and Euclidean Convolutions on 3D Meshes | Jonas Schult, Francis Engelmann, Theodora Kontogianni, Bastian Leibe | We propose DualConvMesh-Nets (DCM-Net) a family of deep hierarchical convolutional networks over 3D geometric data that combines two types of convolutions. The first type, Geodesic convolutions, defines the kernel weights over mesh surfaces or graphs. That is, the convolutional kernel weights are mapped to the local surface of a given mesh. The second type, Euclidean convolutions, is independent of any underlying mesh structure. The convolutional kernel is applied on a neighborhood obtained from a local affinity representation based on the Euclidean distance between 3D points. Intuitively, geodesic convolutions can easily separate objects that are spatially close but have disconnected surfaces, while Euclidean convolutions can represent interactions between nearby objects better, as they are oblivious to object surfaces. To realize a multi-resolution architecture, we borrow well-established mesh simplification methods from the geometry processing domain and adapt them to define mesh-preserving pooling and unpooling operations. We experimentally show that combining both types of convolutions in our architecture leads to significant performance gains for 3D semantic segmentation, and we report competitive results on three scene segmentation benchmarks. Models and code will be made publicly available. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Schult_DualConvMesh-Net_Joint_Geodesic_and_Euclidean_Convolutions_on_3D_Meshes_CVPR_2020_paper.pdf | null | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Schult_DualConvMesh-Net_Joint_Geodesic_and_Euclidean_Convolutions_on_3D_Meshes_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Schult_DualConvMesh-Net_Joint_Geodesic_and_Euclidean_Convolutions_on_3D_Meshes_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Schult_DualConvMesh-Net_Joint_Geodesic_CVPR_2020_supplemental.pdf | null | null |
Noise Modeling, Synthesis and Classification for Generic Object Anti-Spoofing | Joel Stehouwer, Amin Jourabloo, Yaojie Liu, Xiaoming Liu | Using printed photograph and replaying videos of biometric modalities, such as iris, fingerprint and face, are common attacks to fool the recognition systems for granting access as the genuine user. With the growing online person-to-person shopping (e.g., Ebay and Craigslist), such attacks also threaten those services, where the online photo illustration might not be captured from real items but from paper or digital screen. Thus, the study of anti-spoofing should be extended from modality-specific solutions to generic-object-based ones. In this work, we define and tackle the problem of Generic Object Anti-Spoofing (GOAS) for the first time. One significant cue to detect these attacks is the noise patterns introduced by the capture sensors and spoof mediums. Different sensor/medium combinations can result in diverse noise patterns. We propose a GAN-based architecture to synthesize and identify the noise patterns from seen and unseen medium/sensor combinations. We show that the procedure of synthesis and identification are mutually beneficial. We further demonstrate the learned GOAS models can directly contribute to modality-specific anti-spoofing without domain transfer. The code and GOSet dataset are available at cvlab.cse.msu.edu/project-goas.html. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Stehouwer_Noise_Modeling_Synthesis_and_Classification_for_Generic_Object_Anti-Spoofing_CVPR_2020_paper.pdf | http://arxiv.org/abs/2003.13043 | https://www.youtube.com/watch?v=aDoA2u-hjRo | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Stehouwer_Noise_Modeling_Synthesis_and_Classification_for_Generic_Object_Anti-Spoofing_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Stehouwer_Noise_Modeling_Synthesis_and_Classification_for_Generic_Object_Anti-Spoofing_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
An Investigation Into the Stochasticity of Batch Whitening | Lei Huang, Lei Zhao, Yi Zhou, Fan Zhu, Li Liu, Ling Shao | Batch Normalization (BN) is extensively employed in various network architectures by performing standardization within mini-batches. A full understanding of the process has been a central target in the deep learning community. Unlike existing works, which usually only analyze the standardization operation, this paper investigates the more general Batch Whitening (BW). Our work originates from the observation that while various whitening transformations equivalently improve the conditioning, they show significantly different behaviors in discriminative scenarios and in training Generative Adversarial Networks (GANs). We attribute this phenomenon to the stochasticity that BW introduces. We quantitatively investigate the stochasticity of different whitening transformations and show that it correlates well with the optimization behaviors during training. We also investigate how stochasticity relates to the estimation of population statistics during inference. Based on our analysis, we provide a framework for designing and comparing BW algorithms in different scenarios. Our proposed BW algorithm improves residual networks by a significant margin on ImageNet classification. Besides, we show that the stochasticity of BW can improve the GAN's performance, albeit at the cost of training stability. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Huang_An_Investigation_Into_the_Stochasticity_of_Batch_Whitening_CVPR_2020_paper.pdf | http://arxiv.org/abs/2003.12327 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Huang_An_Investigation_Into_the_Stochasticity_of_Batch_Whitening_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Huang_An_Investigation_Into_the_Stochasticity_of_Batch_Whitening_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Huang_An_Investigation_Into_CVPR_2020_supplemental.pdf | null | null |
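The whitening transforms this abstract analyzes share one computational core, and the stochasticity it studies comes from the fact that the whitening matrix is re-estimated from every sampled mini-batch. Below is a minimal, hedged sketch of ZCA whitening (one member of the BW family), written as a plain illustration rather than the authors' implementation:

```python
import torch

def zca_whiten(x, eps=1e-5):
    """ZCA-whiten a mini-batch: decorrelate channels so that the
    mini-batch covariance becomes (approximately) the identity.

    x: (N, C) activations; eps stabilizes small eigenvalues.
    """
    xc = x - x.mean(dim=0, keepdim=True)           # center each channel
    cov = xc.t() @ xc / (x.shape[0] - 1)           # (C, C) sample covariance
    eigvals, eigvecs = torch.linalg.eigh(cov)      # symmetric eigendecomposition
    inv_sqrt = eigvecs @ torch.diag((eigvals + eps).rsqrt()) @ eigvecs.t()
    return xc @ inv_sqrt                           # rotate, scale, rotate back

x = torch.randn(128, 16)
print(torch.cov(zca_whiten(x).t()).diagonal()[:4])  # entries close to 1.0
```

Because `cov` depends on which examples land in the batch, the effective transform changes from batch to batch, whereas plain standardization perturbs only per-channel means and variances.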
VIBE: Video Inference for Human Body Pose and Shape Estimation | Muhammed Kocabas, Nikos Athanasiou, Michael J. Black | Human motion is fundamental to understanding behavior. Despite progress on single-image 3D pose and shape estimation, existing video-based state-of-the-art methods fail to produce accurate and natural motion sequences due to a lack of ground-truth 3D motion data for training. To address this problem, we propose "Video Inference for Body Pose and Shape Estimation" (VIBE), which makes use of an existing large-scale motion capture dataset (AMASS) together with unpaired, in-the-wild, 2D keypoint annotations. Our key novelty is an adversarial learning framework that leverages AMASS to discriminate between real human motions and those produced by our temporal pose and shape regression networks. We define a novel temporal network architecture with a self-attention mechanism and show that adversarial training, at the sequence level, produces kinematically plausible motion sequences without in-the-wild ground-truth 3D labels. We perform extensive experimentation to analyze the importance of motion and demonstrate the effectiveness of VIBE on challenging 3D pose estimation datasets, achieving state-of-the-art performance. Code and pretrained models are available at https://github.com/mkocabas/VIBE | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Kocabas_VIBE_Video_Inference_for_Human_Body_Pose_and_Shape_Estimation_CVPR_2020_paper.pdf | http://arxiv.org/abs/1912.05656 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Kocabas_VIBE_Video_Inference_for_Human_Body_Pose_and_Shape_Estimation_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Kocabas_VIBE_Video_Inference_for_Human_Body_Pose_and_Shape_Estimation_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Kocabas_VIBE_Video_Inference_CVPR_2020_supplemental.pdf | null | null |
IDA-3D: Instance-Depth-Aware 3D Object Detection From Stereo Vision for Autonomous Driving | Wanli Peng, Hao Pan, He Liu, Yi Sun | 3D object detection is an important scene understanding task in autonomous driving and virtual reality. Approaches based on LiDAR technology have high performance, but LiDAR is expensive. Considering more general scenes, where there is no LiDAR data in the 3D datasets, we propose a 3D object detection approach from stereo vision which does not rely on LiDAR data either as input or as supervision in training, but solely takes RGB images with corresponding annotated 3D bounding boxes as training data. As object depth estimation is the key factor affecting the performance of 3D object detection, we introduce an Instance-Depth-Aware (IDA) module which accurately predicts the depth of the 3D bounding box's center by instance-depth awareness, disparity adaptation and matching cost reweighting. Moreover, our model is an end-to-end learning framework which does not require multiple stages or post-processing algorithms. We provide detailed experiments on the KITTI benchmark and achieve impressive improvements compared with the existing image-based methods. Our code is available at https://github.com/swords123/IDA-3D. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Peng_IDA-3D_Instance-Depth-Aware_3D_Object_Detection_From_Stereo_Vision_for_Autonomous_CVPR_2020_paper.pdf | null | https://www.youtube.com/watch?v=nXzpB7JzArs | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Peng_IDA-3D_Instance-Depth-Aware_3D_Object_Detection_From_Stereo_Vision_for_Autonomous_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Peng_IDA-3D_Instance-Depth-Aware_3D_Object_Detection_From_Stereo_Vision_for_Autonomous_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Peng_IDA-3D_Instance-Depth-Aware_3D_CVPR_2020_supplemental.pdf | null | null |
FroDO: From Detections to 3D Objects | Martin Runz, Kejie Li, Meng Tang, Lingni Ma, Chen Kong, Tanner Schmidt, Ian Reid, Lourdes Agapito, Julian Straub, Steven Lovegrove, Richard Newcombe | Object-oriented maps are important for scene understanding since they jointly capture geometry and semantics, allow individual instantiation and meaningful reasoning about objects. We introduce FroDO, a method for accurate 3D reconstruction of object instances from RGB video that infers their location, pose and shape in a coarse to fine manner. Key to FroDO is to embed object shapes in a novel learnt shape space that allows seamless switching between sparse point cloud and dense DeepSDF decoding. Given an input sequence of localized RGB frames, FroDO first aggregates 2D detections to instantiate a 3D bounding box per object. A shape code is regressed using an encoder network before optimizing shape and pose further under the learnt shape priors using sparse or dense shape representations. The optimization uses multi-view geometric, photometric and silhouette losses. We evaluate on real-world datasets, including Pix3D, Redwood-OS, and ScanNet, for single-view, multi-view, and multi-object reconstruction. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Runz_FroDO_From_Detections_to_3D_Objects_CVPR_2020_paper.pdf | http://arxiv.org/abs/2005.05125 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Runz_FroDO_From_Detections_to_3D_Objects_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Runz_FroDO_From_Detections_to_3D_Objects_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Runz_FroDO_From_Detections_CVPR_2020_supplemental.pdf | null | null |
KeypointNet: A Large-Scale 3D Keypoint Dataset Aggregated From Numerous Human Annotations | Yang You, Yujing Lou, Chengkun Li, Zhoujun Cheng, Liangwei Li, Lizhuang Ma, Cewu Lu, Weiming Wang | Detecting 3D object keypoints is of great interest to the areas of both graphics and computer vision. There have been several 2D and 3D keypoint datasets aiming to address this problem in a data-driven way. These datasets, however, either lack scalability or bring ambiguity to the definition of keypoints. Therefore, we present KeypointNet: the first large-scale and diverse 3D keypoint dataset that contains 83,231 keypoints and 8,329 3D models from 16 object categories, built by leveraging numerous human annotations. To handle the inconsistency between annotations from different people, we propose a novel method to aggregate these keypoints automatically, through minimization of a fidelity loss. Finally, ten state-of-the-art methods are benchmarked on our proposed dataset. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/You_KeypointNet_A_Large-Scale_3D_Keypoint_Dataset_Aggregated_From_Numerous_Human_CVPR_2020_paper.pdf | http://arxiv.org/abs/2002.12687 | https://www.youtube.com/watch?v=_xy8h1M8Ejs | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/You_KeypointNet_A_Large-Scale_3D_Keypoint_Dataset_Aggregated_From_Numerous_Human_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/You_KeypointNet_A_Large-Scale_3D_Keypoint_Dataset_Aggregated_From_Numerous_Human_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
Bridging the Gap Between Anchor-Based and Anchor-Free Detection via Adaptive Training Sample Selection | Shifeng Zhang, Cheng Chi, Yongqiang Yao, Zhen Lei, Stan Z. Li | Object detection has been dominated by anchor-based detectors for several years. Recently, anchor-free detectors have become popular due to the proposal of FPN and Focal Loss. In this paper, we first point out that the essential difference between anchor-based and anchor-free detection is actually how to define positive and negative training samples, which leads to the performance gap between them. If they adopt the same definition of positive and negative samples during training, there is no obvious difference in the final performance, no matter regressing from a box or a point. This shows that how to select positive and negative training samples is important for current object detectors. Then, we propose an Adaptive Training Sample Selection (ATSS) to automatically select positive and negative samples according to the statistical characteristics of objects. It significantly improves the performance of anchor-based and anchor-free detectors and bridges the gap between them. Finally, we discuss the necessity of tiling multiple anchors per location on the image to detect objects. Extensive experiments conducted on MS COCO support our aforementioned analysis and conclusions. With the newly introduced ATSS, we improve state-of-the-art detectors by a large margin to 50.7% AP without introducing any overhead. The code is available at https://github.com/sfzhang15/ATSS. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Zhang_Bridging_the_Gap_Between_Anchor-Based_and_Anchor-Free_Detection_via_Adaptive_CVPR_2020_paper.pdf | http://arxiv.org/abs/1912.02424 | https://www.youtube.com/watch?v=CnchEvVhI3c | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Zhang_Bridging_the_Gap_Between_Anchor-Based_and_Anchor-Free_Detection_via_Adaptive_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Zhang_Bridging_the_Gap_Between_Anchor-Based_and_Anchor-Free_Detection_via_Adaptive_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
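For readers who want the selection rule itself, here is a compressed, single-pyramid-level sketch of the adaptive positive/negative split the abstract describes: shortlist candidates by center distance, threshold their IoUs at mean plus standard deviation, and require centers inside the ground-truth box. The released code at the linked repository handles multiple feature levels and details this sketch omits, so treat it as an assumption-laden illustration:

```python
import numpy as np

def atss_positives(anchor_centers, gt_box, ious, k=9):
    """Single-level ATSS positive selection.

    anchor_centers: (A, 2); gt_box: (4,) as [x1, y1, x2, y2];
    ious: (A,) IoU of each anchor box with gt_box.
    """
    gt_center = np.array([(gt_box[0] + gt_box[2]) / 2,
                          (gt_box[1] + gt_box[3]) / 2])
    dists = np.linalg.norm(anchor_centers - gt_center, axis=1)
    candidates = np.argsort(dists)[:k]              # k closest anchors
    thr = ious[candidates].mean() + ious[candidates].std()
    return [int(a) for a in candidates
            if ious[a] >= thr
            # the anchor center must also fall inside the gt box
            and gt_box[0] <= anchor_centers[a, 0] <= gt_box[2]
            and gt_box[1] <= anchor_centers[a, 1] <= gt_box[3]]
```

The adaptive `mean + std` threshold is the crux: well-covered objects get a strict cutoff, while hard objects with uniformly low IoUs still receive positives.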
Structure Aware Single-Stage 3D Object Detection From Point Cloud | Chenhang He, Hui Zeng, Jianqiang Huang, Xian-Sheng Hua, Lei Zhang | 3D object detection from point cloud data plays an essential role in autonomous driving. Current single-stage detectors are efficient by progressively downscaling the 3D point clouds in a fully convolutional manner. However, the downscaled features inevitably lose spatial information and cannot make full use of the structure information of 3D point cloud, degrading their localization precision. In this work, we propose to improve the localization precision of single-stage detectors by explicitly leveraging the structure information of 3D point cloud. Specifically, we design an auxiliary network which converts the convolutional features in the backbone network back to point-level representations. The auxiliary network is jointly optimized, by two point-level supervisions, to guide the convolutional features in the backbone network to be aware of the object structure. The auxiliary network can be detached after training and therefore introduces no extra computation in the inference stage. Besides, considering that single-stage detectors suffer from the discordance between the predicted bounding boxes and corresponding classification confidences, we develop an efficient part-sensitive warping operation to align the confidences to the predicted bounding boxes. Our proposed detector ranks at the top of KITTI 3D/BEV detection leaderboards and runs at 25 FPS for inference. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/He_Structure_Aware_Single-Stage_3D_Object_Detection_From_Point_Cloud_CVPR_2020_paper.pdf | null | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/He_Structure_Aware_Single-Stage_3D_Object_Detection_From_Point_Cloud_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/He_Structure_Aware_Single-Stage_3D_Object_Detection_From_Point_Cloud_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
MINA: Convex Mixed-Integer Programming for Non-Rigid Shape Alignment | Florian Bernard, Zeeshan Khan Suri, Christian Theobalt | We present a convex mixed-integer programming formulation for non-rigid shape matching. To this end, we propose a novel shape deformation model based on an efficient low-dimensional discrete model, so that finding a globally optimal solution is tractable in (most) practical cases. Our approach combines several favourable properties, namely it is independent of the initialisation, it is much more efficient to solve to global optimality compared to analogous quadratic assignment problem formulations, and it is highly flexible in terms of the variants of matching problems it can handle. Experimentally we demonstrate that our approach outperforms existing methods for sparse shape matching, that it can be used for initialising dense shape matching methods, and we showcase its flexibility on several examples. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Bernard_MINA_Convex_Mixed-Integer_Programming_for_Non-Rigid_Shape_Alignment_CVPR_2020_paper.pdf | http://arxiv.org/abs/2002.12623 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Bernard_MINA_Convex_Mixed-Integer_Programming_for_Non-Rigid_Shape_Alignment_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Bernard_MINA_Convex_Mixed-Integer_Programming_for_Non-Rigid_Shape_Alignment_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Bernard_MINA_Convex_Mixed-Integer_CVPR_2020_supplemental.pdf | null | null |
Enhanced Transport Distance for Unsupervised Domain Adaptation | Mengxue Li, Yi-Ming Zhai, You-Wei Luo, Peng-Fei Ge, Chuan-Xian Ren | Unsupervised domain adaptation (UDA) is a representative problem in transfer learning, which aims to improve the classification performance on an unlabeled target domain by exploiting discriminant information from a labeled source domain. The optimal transport model has been used for UDA from the perspective of distribution matching. However, the transport distance cannot reflect the discriminant information from either domain knowledge or category prior. In this work, we propose an enhanced transport distance (ETD) for UDA. This method builds an attention-aware transport distance, which can be viewed as the prediction feedback of the iteratively learned classifier, to measure the domain discrepancy. Further, the Kantorovich potential variable is re-parameterized by deep neural networks to learn the distribution in the latent space. An entropy-based regularization is developed to explore the intrinsic structure of the target domain. The proposed method is optimized alternately in an end-to-end manner. Extensive experiments are conducted on four benchmark datasets to demonstrate the SOTA performance of ETD. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Li_Enhanced_Transport_Distance_for_Unsupervised_Domain_Adaptation_CVPR_2020_paper.pdf | null | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Li_Enhanced_Transport_Distance_for_Unsupervised_Domain_Adaptation_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Li_Enhanced_Transport_Distance_for_Unsupervised_Domain_Adaptation_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
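The transport machinery underneath ETD is entropy-regularized optimal transport. A minimal Sinkhorn sketch of that standard building block follows; ETD's contributions, per the abstract, are the attention-derived reweighting of the cost and the network re-parameterization of the Kantorovich potential, neither of which appears in this plain version:

```python
import numpy as np

def sinkhorn(C, a, b, reg=0.1, n_iter=200):
    """Entropy-regularized optimal transport via Sinkhorn scaling.

    C: (n, m) cost matrix between source and target features;
    a: (n,), b: (m,) marginal weights, each summing to 1.
    Returns the transport plan P; (P * C).sum() is the distance.
    """
    K = np.exp(-C / reg)                  # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):               # alternate marginal corrections
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]    # diag(u) @ K @ diag(v)

n, m = 5, 7
P = sinkhorn(np.random.rand(n, m), np.full(n, 1 / n), np.full(m, 1 / m))
print(P.sum(axis=1))                      # recovers the source marginal a
```

In an ETD-like setup, classifier feedback would modulate the entries of `C` before the scaling iterations run.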
A Local-to-Global Approach to Multi-Modal Movie Scene Segmentation | Anyi Rao, Linning Xu, Yu Xiong, Guodong Xu, Qingqiu Huang, Bolei Zhou, Dahua Lin | Scene, as the crucial unit of storytelling in movies, contains complex activities of actors and their interactions in a physical environment. Identifying the composition of scenes serves as a critical step towards semantic understanding of movies. This is very challenging compared to the videos studied in conventional vision problems, e.g. action recognition, as scenes in movies usually contain much richer temporal structures and more complex semantic information. Towards this goal, we scale up the scene segmentation task by building a large-scale video dataset MovieScenes, which contains 21K annotated scene segments from 150 movies. We further propose a local-to-global scene segmentation framework, which integrates multi-modal information across three levels, i.e. clip, segment, and movie. This framework is able to distill complex semantics from hierarchical temporal structures over a long movie, providing top-down guidance for scene segmentation. Our experiments show that the proposed network is able to segment a movie into scenes with high accuracy, consistently outperforming previous methods. We also found that pretraining on our MovieScenes can bring significant improvements to the existing approaches. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Rao_A_Local-to-Global_Approach_to_Multi-Modal_Movie_Scene_Segmentation_CVPR_2020_paper.pdf | http://arxiv.org/abs/2004.02678 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Rao_A_Local-to-Global_Approach_to_Multi-Modal_Movie_Scene_Segmentation_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Rao_A_Local-to-Global_Approach_to_Multi-Modal_Movie_Scene_Segmentation_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Rao_A_Local-to-Global_Approach_CVPR_2020_supplemental.pdf | null | null |
TA-Student VQA: Multi-Agents Training by Self-Questioning | Peixi Xiong, Ying Wu | There are two main challenges in Visual Question Answering (VQA). The first is that each model has its own strengths and shortcomings when applied to different questions; what is more, the "ceiling effect" for specific questions is difficult to overcome with simple consecutive training. The second challenge is that, even though state-of-the-art datasets are large in scale, the questions targeted at a single image are limited in format and lack diversity in content. We introduce our self-questioning model with multi-agent training: TA-student VQA. This framework differs from standard VQA algorithms by involving question-generating mechanisms and collaborative learning of questions between question-answering agents. Thus, TA-student VQA overcomes the limitations in content diversity and format variation of questions and improves the overall performance of multiple question-answering agents. We evaluate our model on VQA-v2, where it outperforms algorithms without such mechanisms. In addition, TA-student VQA achieves a greater model capacity, allowing it to answer more generated questions in addition to those in the annotated datasets. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Xiong_TA-Student_VQA_Multi-Agents_Training_by_Self-Questioning_CVPR_2020_paper.pdf | null | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Xiong_TA-Student_VQA_Multi-Agents_Training_by_Self-Questioning_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Xiong_TA-Student_VQA_Multi-Agents_Training_by_Self-Questioning_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
Deep Structure-Revealed Network for Texture Recognition | Wei Zhai, Yang Cao, Zheng-Jun Zha, HaiYong Xie, Feng Wu | Texture recognition is a challenging visual task since various primitives along with their arrangements can be recognized from the same texture image when perceived in different contexts. Some recent work building on CNNs exploits orderless aggregation to provide invariance to spatial arrangements. However, these methods ignore the inherent structural property of textures, which is a critical cue for distinguishing and describing texture images in the wild. To address this problem, we propose a novel Deep Structure-Revealed Network (DSR-Net) that leverages spatial dependency among the captured primitives as a structural representation for texture recognition. Specifically, a primitive capturing module (PCM) is devised to generate multiple primitives from eight directional spatial contexts, in which deep features are first extracted under the constraints of a direction map and then encoded based on neighborhood similarities. Next, these primitives are associated with a dependence learning module (DLM) to generate the structural representation, in which a two-way collaborative relationship strategy is introduced to perceive the spatial dependencies among multiple primitives. Finally, the structure-revealed texture representations are integrated with spatially ordered information to achieve real-world texture recognition. Evaluation on the five most challenging texture recognition datasets has demonstrated the superiority of the proposed model against state-of-the-art methods. The structure-revealed performance of DSR-Net is further verified in extensive experiments, including fine-grained classification and semantic segmentation. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Zhai_Deep_Structure-Revealed_Network_for_Texture_Recognition_CVPR_2020_paper.pdf | null | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Zhai_Deep_Structure-Revealed_Network_for_Texture_Recognition_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Zhai_Deep_Structure-Revealed_Network_for_Texture_Recognition_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
Dynamic Traffic Modeling From Overhead Imagery | Scott Workman, Nathan Jacobs | Our goal is to use overhead imagery to understand patterns in traffic flow, for instance answering questions such as how fast could you traverse Times Square at 3am on a Sunday. A traditional approach for solving this problem would be to model the speed of each road segment as a function of time. However, this strategy is limited in that a significant amount of data must first be collected before a model can be used and it fails to generalize to new areas. Instead, we propose an automatic approach for generating dynamic maps of traffic speeds using convolutional neural networks. Our method operates on overhead imagery, is conditioned on location and time, and outputs a local motion model that captures likely directions of travel and corresponding travel speeds. To train our model, we take advantage of historical traffic data collected from New York City. Experimental results demonstrate that our method can be applied to generate accurate city-scale traffic models. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Workman_Dynamic_Traffic_Modeling_From_Overhead_Imagery_CVPR_2020_paper.pdf | null | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Workman_Dynamic_Traffic_Modeling_From_Overhead_Imagery_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Workman_Dynamic_Traffic_Modeling_From_Overhead_Imagery_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Workman_Dynamic_Traffic_Modeling_CVPR_2020_supplemental.pdf | null | null |
In Defense of Grid Features for Visual Question Answering | Huaizu Jiang, Ishan Misra, Marcus Rohrbach, Erik Learned-Miller, Xinlei Chen | Popularized as `bottom-up' attention, bounding box (or region) based visual features have recently surpassed vanilla grid-based convolutional features as the de facto standard for vision and language tasks like visual question answering (VQA). However, it is not clear whether the advantages of regions (e.g. better localization) are the key reasons for the success of bottom-up attention. In this paper, we revisit grid features for VQA, and find they can work surprisingly well -- running more than an order of magnitude faster with the same accuracy (e.g. if pre-trained in a similar fashion). Through extensive experiments, we verify that this observation holds true across different VQA models (reporting a state-of-the-art accuracy on VQA 2.0 test-std, 72.71), datasets, and generalizes well to other tasks like image captioning. As grid features make the model design and training process much simpler, this enables us to train them end-to-end and also use a more flexible network design. We learn VQA models end-to-end, from pixels directly to answers, and show that strong performance is achievable without using any region annotations in pre-training. We hope our findings help further improve the scientific understanding and the practical application of VQA. Code and features will be made available. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Jiang_In_Defense_of_Grid_Features_for_Visual_Question_Answering_CVPR_2020_paper.pdf | http://arxiv.org/abs/2001.03615 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Jiang_In_Defense_of_Grid_Features_for_Visual_Question_Answering_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Jiang_In_Defense_of_Grid_Features_for_Visual_Question_Answering_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Jiang_In_Defense_of_CVPR_2020_supplemental.pdf | null | null |
Rethinking the Route Towards Weakly Supervised Object Localization | Chen-Lin Zhang, Yun-Hao Cao, Jianxin Wu | Weakly supervised object localization (WSOL) aims to localize objects with only image-level labels. Previous methods often try to utilize feature maps and classification weights to localize objects indirectly using image-level annotations. In this paper, we demonstrate that weakly supervised object localization should be divided into two parts: class-agnostic object localization and object classification. For class-agnostic object localization, we should use class-agnostic methods to generate noisy pseudo annotations and then perform bounding box regression on them without class labels. We propose the pseudo supervised object localization (PSOL) method as a new way to solve WSOL. Our PSOL models have good transferability across different datasets without fine-tuning. With generated pseudo bounding boxes, we achieve 58.00% localization accuracy on ImageNet and 74.74% localization accuracy on CUB-200, which have a large edge over previous models. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Zhang_Rethinking_the_Route_Towards_Weakly_Supervised_Object_Localization_CVPR_2020_paper.pdf | http://arxiv.org/abs/2002.11359 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Zhang_Rethinking_the_Route_Towards_Weakly_Supervised_Object_Localization_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Zhang_Rethinking_the_Route_Towards_Weakly_Supervised_Object_Localization_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
Weakly Supervised Fine-Grained Image Classification via Guassian Mixture Model Oriented Discriminative Learning | Zhihui Wang, Shijie Wang, Shuhui Yang, Haojie Li, Jianjun Li, Zezhou Li | Existing weakly supervised fine-grained image recognition (WFGIR) methods usually pick out the discriminative regions from the high-level feature maps directly. We discover that, due to the stacking of local receptive fields, Convolutional Neural Networks cause discriminative region diffusion in high-level feature maps, which leads to inaccurate discriminative region localization. In this paper, we propose an end-to-end Discriminative Feature-oriented Gaussian Mixture Model (DF-GMM) to address the problem of discriminative region diffusion and find better fine-grained details. Specifically, DF-GMM consists of 1) a low-rank representation mechanism (LRM), which learns a set of low-rank discriminative bases by a Gaussian Mixture Model (GMM) in high-level semantic feature maps to improve the discriminative ability of feature representation, and 2) a low-rank representation reorganization mechanism (LR^2M), which restores the spatial information corresponding to the low-rank discriminative bases to reconstruct the low-rank feature maps. This alleviates the discriminative region diffusion problem and locates discriminative regions more precisely. Extensive experiments verify that DF-GMM yields the best performance under the same settings as the most competitive approaches, on the CUB-Bird and Stanford-Cars datasets and on FGVC Aircraft. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Wang_Weakly_Supervised_Fine-Grained_Image_Classification_via_Guassian_Mixture_Model_Oriented_CVPR_2020_paper.pdf | null | https://www.youtube.com/watch?v=RS0yKbKYSKY | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Wang_Weakly_Supervised_Fine-Grained_Image_Classification_via_Guassian_Mixture_Model_Oriented_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Wang_Weakly_Supervised_Fine-Grained_Image_Classification_via_Guassian_Mixture_Model_Oriented_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
Violin: A Large-Scale Dataset for Video-and-Language Inference | Jingzhou Liu, Wenhu Chen, Yu Cheng, Zhe Gan, Licheng Yu, Yiming Yang, Jingjing Liu | We introduce a new task, Video-and-Language Inference, for joint multimodal understanding of video and text. Given a video clip with aligned subtitles as premise, paired with a natural language hypothesis based on the video content, a model needs to infer whether the hypothesis is entailed or contradicted by the given video clip. A new large-scale dataset, named Violin (VIdeO-and-Language INference), is introduced for this task, which consists of 95,322 video-hypothesis pairs from 15,887 video clips, spanning over 582 hours of video. These video clips contain rich content with diverse temporal dynamics, event shifts, and people interactions, collected from two sources: (i) popular TV shows, and (ii) movie clips from YouTube channels. In order to address our new multimodal inference task, a model is required to possess sophisticated reasoning skills, from surface-level grounding (e.g., identifying objects and characters in the video) to in-depth commonsense reasoning (e.g., inferring causal relations of events in the video). We present a detailed analysis of the dataset and an extensive evaluation over many strong baselines, providing valuable insights on the challenges of this new task. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Liu_Violin_A_Large-Scale_Dataset_for_Video-and-Language_Inference_CVPR_2020_paper.pdf | http://arxiv.org/abs/2003.11618 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Liu_Violin_A_Large-Scale_Dataset_for_Video-and-Language_Inference_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Liu_Violin_A_Large-Scale_Dataset_for_Video-and-Language_Inference_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Liu_Violin_A_Large-Scale_CVPR_2020_supplemental.pdf | null | null |
Local Context Normalization: Revisiting Local Normalization | Anthony Ortiz, Caleb Robinson, Dan Morris, Olac Fuentes, Christopher Kiekintveld, Md Mahmudulla Hassan, Nebojsa Jojic | Normalization layers have been shown to improve convergence in deep neural networks, and even add useful inductive biases. In many vision applications the local spatial context of the features is important, but most common normalization schemes including Group Normalization (GN), Instance Normalization (IN), and Layer Normalization (LN) normalize over the entire spatial dimension of a feature. This can wash out important signals and degrade performance. For example, in applications that use satellite imagery, input images can be arbitrarily large; consequently, it is nonsensical to normalize over the entire area. Positional Normalization (PN), on the other hand, only normalizes over a single spatial position at a time. A natural compromise is to normalize features by local context, while also taking into account group level information. In this paper, we propose Local Context Normalization (LCN): a normalization layer where every feature is normalized based on a window around it and the filters in its group. We propose an algorithmic solution to make LCN efficient for arbitrary window sizes, even if every point in the image has a unique window. LCN outperforms its Batch Normalization (BN), GN, IN, and LN counterparts for object detection, semantic segmentation, and instance segmentation applications in several benchmark datasets, while keeping performance independent of the batch size and facilitating transfer learning. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Ortiz_Local_Context_Normalization_Revisiting_Local_Normalization_CVPR_2020_paper.pdf | http://arxiv.org/abs/1912.05845 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Ortiz_Local_Context_Normalization_Revisiting_Local_Normalization_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Ortiz_Local_Context_Normalization_Revisiting_Local_Normalization_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
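A windowed normalization like this is straightforward to prototype with box filters. The sketch below computes per-position, per-group statistics with average pooling; it is only a naive reference (the paper proposes a genuinely efficient algorithm for arbitrary window sizes), and the window size and group count here are made-up hyperparameters:

```python
import torch
import torch.nn.functional as F

def local_context_norm(x, window=9, groups=2, eps=1e-5):
    """Normalize each position by the mean/variance of a local spatial
    window, pooled jointly over the channels of its group.

    x: (N, C, H, W) with C divisible by `groups`; `window` must be odd.
    """
    n, c, h, w = x.shape
    xg = x.view(n * groups, c // groups, h, w)
    pad = window // 2
    # Box-filtered first and second moments: each pixel only sees its window.
    mean = F.avg_pool2d(xg.mean(1, keepdim=True), window, stride=1,
                        padding=pad, count_include_pad=False)
    sq = F.avg_pool2d((xg ** 2).mean(1, keepdim=True), window, stride=1,
                      padding=pad, count_include_pad=False)
    var = (sq - mean ** 2).clamp_min(0.0)
    return ((xg - mean) / (var + eps).sqrt()).view(n, c, h, w)

out = local_context_norm(torch.randn(2, 8, 32, 32))
```

Unlike GN or LN, the statistics here vary per pixel, so a bright patch in one corner of a large image cannot wash out features elsewhere.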
Tangent Images for Mitigating Spherical Distortion | Marc Eder, Mykhailo Shvets, John Lim, Jan-Michael Frahm | In this work, we propose "tangent images," a spherical image representation that facilitates transferable and scalable 360 degree computer vision. Inspired by techniques in cartography and computer graphics, we render a spherical image to a set of distortion-mitigated, locally-planar image grids tangent to a subdivided icosahedron. By varying the resolution of these grids independently of the subdivision level, we can effectively represent high resolution spherical images while still benefiting from the low-distortion icosahedral spherical approximation. We show that training standard convolutional neural networks on tangent images compares favorably to the many specialized spherical convolutional kernels that have been developed, while also scaling efficiently to handle significantly higher spherical resolutions. Furthermore, because our approach does not require specialized kernels, we show that we can transfer networks trained on perspective images to spherical data without fine-tuning and with limited performance drop-off. Finally, we demonstrate that tangent images can be used to improve the quality of sparse feature detection on spherical images, illustrating its usefulness for traditional computer vision tasks like structure-from-motion and SLAM. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Eder_Tangent_Images_for_Mitigating_Spherical_Distortion_CVPR_2020_paper.pdf | http://arxiv.org/abs/1912.09390 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Eder_Tangent_Images_for_Mitigating_Spherical_Distortion_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Eder_Tangent_Images_for_Mitigating_Spherical_Distortion_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Eder_Tangent_Images_for_CVPR_2020_supplemental.pdf | null | null |
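The per-pixel mapping underlying a tangent-plane rendering is the classical gnomonic projection, which keeps distortion low near the tangent point. Here is a small sketch of that projection only, not the authors' rendering pipeline, which additionally handles icosahedral subdivision and resampling:

```python
import numpy as np

def gnomonic(lat, lon, lat0, lon0):
    """Gnomonic projection of spherical points onto the plane tangent
    at (lat0, lon0). Valid for points within the tangent hemisphere
    (cos_c > 0); angles are in radians.
    """
    cos_c = (np.sin(lat0) * np.sin(lat)
             + np.cos(lat0) * np.cos(lat) * np.cos(lon - lon0))
    x = np.cos(lat) * np.sin(lon - lon0) / cos_c
    y = (np.cos(lat0) * np.sin(lat)
         - np.sin(lat0) * np.cos(lat) * np.cos(lon - lon0)) / cos_c
    return x, y

# Directions near the tangent point map almost linearly (y ~ tan(lat)),
# which is why distortion stays low on each tangent image.
lats = np.deg2rad(np.linspace(-5.0, 5.0, 3))
print(gnomonic(lats, 0.0, 0.0, 0.0)[1])
```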
Normalized and Geometry-Aware Self-Attention Network for Image Captioning | Longteng Guo, Jing Liu, Xinxin Zhu, Peng Yao, Shichen Lu, Hanqing Lu | Self-attention (SA) network has shown profound value in image captioning. In this paper, we improve SA from two aspects to promote the performance of image captioning. First, we propose Normalized Self-Attention (NSA), a reparameterization of SA that brings the benefits of normalization inside SA. While normalization is previously only applied outside SA, we introduce a novel normalization method and demonstrate that it is both possible and beneficial to perform it on the hidden activations inside SA. Second, to compensate for the major limit of Transformer that it fails to model the geometry structure of the input objects, we propose a class of Geometry-aware Self-Attention (GSA) that extends SA to explicitly and efficiently consider the relative geometry relations between the objects in the image. To construct our image captioning model, we combine the two modules and apply it to the vanilla self-attention network. We extensively evaluate our proposals on MS-COCO image captioning dataset and superior results are achieved when comparing to state-of-the-art approaches. Further experiments on three challenging tasks, i.e. video captioning, machine translation, and visual question answering, show the generality of our methods. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Guo_Normalized_and_Geometry-Aware_Self-Attention_Network_for_Image_Captioning_CVPR_2020_paper.pdf | http://arxiv.org/abs/2003.08897 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Guo_Normalized_and_Geometry-Aware_Self-Attention_Network_for_Image_Captioning_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Guo_Normalized_and_Geometry-Aware_Self-Attention_Network_for_Image_Captioning_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Guo_Normalized_and_Geometry-Aware_CVPR_2020_supplemental.pdf | null | null |
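As a concrete hint of what "relative geometry relations" can look like in code, the sketch below builds log-space pairwise position/scale features between boxes in the style of earlier relation-network encodings; the exact GSA variants in the paper differ, so treat this as an assumed, generic formulation:

```python
import torch

def box_relation_features(boxes, eps=1e-6):
    """Pairwise log-space relative position/scale features between
    object boxes, a generic geometry encoding for attention.

    boxes: (N, 4) as [cx, cy, w, h]; returns (N, N, 4).
    """
    cx, cy, w, h = boxes.unbind(-1)
    dx = (cx[None, :] - cx[:, None]).abs() / (w[:, None] + eps)
    dy = (cy[None, :] - cy[:, None]).abs() / (h[:, None] + eps)
    dw = w[None, :] / (w[:, None] + eps)
    dh = h[None, :] / (h[:, None] + eps)
    # A small MLP would embed these and add them to the attention
    # logits as a per-pair bias term.
    return torch.stack([(dx + eps).log(), (dy + eps).log(),
                        dw.log(), dh.log()], dim=-1)
```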
Deepstrip: High-Resolution Boundary Refinement | Peng Zhou, Brian Price, Scott Cohen, Gregg Wilensky, Larry S. Davis | In this paper, we target refining the boundaries in high-resolution images given low-resolution masks. For memory and computation efficiency, we propose to convert the regions of interest into strip images and compute a boundary prediction in the strip domain. To detect the target boundary, we present a framework with two prediction layers. First, all potential boundaries are predicted as an initial prediction, and then a selection layer is used to pick the target boundary and smooth the result. To encourage accurate prediction, a loss which measures the boundary distance in the strip domain is introduced. In addition, we enforce matching consistency and C0 continuity regularization on the network to reduce false alarms. Extensive experiments on both public datasets and a newly created high-resolution dataset strongly validate our approach. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Zhou_Deepstrip_High-Resolution_Boundary_Refinement_CVPR_2020_paper.pdf | http://arxiv.org/abs/2003.11670 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Zhou_Deepstrip_High-Resolution_Boundary_Refinement_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Zhou_Deepstrip_High-Resolution_Boundary_Refinement_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Zhou_Deepstrip_High-Resolution_Boundary_CVPR_2020_supplemental.pdf | null | null |
xMUDA: Cross-Modal Unsupervised Domain Adaptation for 3D Semantic Segmentation | Maximilian Jaritz, Tuan-Hung Vu, Raoul de Charette, Emilie Wirbel, Patrick Perez | Unsupervised Domain Adaptation (UDA) is crucial to tackle the lack of annotations in a new domain. There are many multi-modal datasets, but most UDA approaches are uni-modal. In this work, we explore how to learn from multi-modality and propose cross-modal UDA (xMUDA) where we assume the presence of 2D images and 3D point clouds for 3D semantic segmentation. This is challenging as the two input spaces are heterogeneous and can be impacted differently by domain shift. In xMUDA, modalities learn from each other through mutual mimicking, disentangled from the segmentation objective, to prevent the stronger modality from adopting false predictions from the weaker one. We evaluate on new UDA scenarios including day-to-night, country-to-country and dataset-to-dataset, leveraging recent autonomous driving datasets. xMUDA brings large improvements over uni-modal UDA on all tested scenarios, and is complementary to state-of-the-art UDA techniques. Code is available at https://github.com/valeoai/xmuda. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Jaritz_xMUDA_Cross-Modal_Unsupervised_Domain_Adaptation_for_3D_Semantic_Segmentation_CVPR_2020_paper.pdf | http://arxiv.org/abs/1911.12676 | https://www.youtube.com/watch?v=gwcXWsQli8E | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Jaritz_xMUDA_Cross-Modal_Unsupervised_Domain_Adaptation_for_3D_Semantic_Segmentation_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Jaritz_xMUDA_Cross-Modal_Unsupervised_Domain_Adaptation_for_3D_Semantic_Segmentation_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Jaritz_xMUDA_Cross-Modal_Unsupervised_CVPR_2020_supplemental.pdf | null | null |
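The mutual-mimicking objective lends itself to a compact sketch: each modality regresses the other's detached prediction with a KL term, kept separate from the main segmentation loss. The version below folds the paper's dedicated mimicry heads into a single pair of logits for brevity, so it approximates the described scheme rather than reproducing the reference code:

```python
import torch
import torch.nn.functional as F

def mimicry_loss(logits_2d, logits_3d):
    """Cross-modal mutual mimicking between 2D and 3D predictions.

    logits_2d, logits_3d: (num_points, num_classes) per-point logits.
    """
    log_p2d = F.log_softmax(logits_2d, dim=1)
    log_p3d = F.log_softmax(logits_3d, dim=1)
    # Detaching the target keeps the weaker modality from dragging
    # the stronger one toward its own false predictions.
    kl_2d = F.kl_div(log_p2d, log_p3d.detach().exp(), reduction="batchmean")
    kl_3d = F.kl_div(log_p3d, log_p2d.detach().exp(), reduction="batchmean")
    return kl_2d + kl_3d
```

This term would be added, with a weighting factor, to each modality's own cross-entropy segmentation loss.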
Seeing without Looking: Contextual Rescoring of Object Detections for AP Maximization | Lourenco V. Pato, Renato Negrinho, Pedro M. Q. Aguiar | The majority of current object detectors lack context: class predictions are made independently from other detections. We propose to incorporate context in object detection by post-processing the output of an arbitrary detector to rescore the confidences of its detections. Rescoring is done by conditioning on contextual information from the entire set of detections: their confidences, predicted classes, and positions. We show that AP can be improved by simply reassigning the detection confidence values such that true positives that survive longer (i.e., those with the correct class and large IoU) are scored higher than false positives or detections with small IoU. In this setting, we use a bidirectional RNN with attention for contextual rescoring and introduce a training target that uses the IoU with ground truth to maximize AP for the given set of detections. The fact that our approach does not require access to visual features makes it computationally inexpensive and agnostic to the detection architecture. In spite of this simplicity, our model consistently improves AP over strong pre-trained baselines (Cascade R-CNN and Faster R-CNN with several backbones), particularly by reducing the confidence of duplicate detections (a learned form of non-maximum suppression) and removing out-of-context objects by conditioning on the confidences, classes, positions, and sizes of the co-occurrent detections. Code is available at https://github.com/LourencoVazPato/seeing-without-looking/ | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Pato_Seeing_without_Looking_Contextual_Rescoring_of_Object_Detections_for_AP_CVPR_2020_paper.pdf | http://arxiv.org/abs/1912.12290 | https://www.youtube.com/watch?v=f3q5IS2kGCw | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Pato_Seeing_without_Looking_Contextual_Rescoring_of_Object_Detections_for_AP_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Pato_Seeing_without_Looking_Contextual_Rescoring_of_Object_Detections_for_AP_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Pato_Seeing_without_Looking_CVPR_2020_supplemental.pdf | null | null |
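Because rescoring needs no visual features, the whole model reduces to a sequence model over per-detection vectors. A hedged sketch follows (a plain bidirectional GRU; the paper additionally uses attention and trains against an AP-maximizing target derived from IoU with ground truth):

```python
import torch
import torch.nn as nn

class ContextRescorer(nn.Module):
    """Rescore detections from context alone: per-detection confidence,
    class one-hot, and box geometry go in; no visual features needed."""

    def __init__(self, num_classes, hidden=128):
        super().__init__()
        self.rnn = nn.GRU(1 + num_classes + 4, hidden,
                          batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, dets):            # dets: (B, N, 1 + num_classes + 4)
        ctx, _ = self.rnn(dets)         # every output sees all detections
        return torch.sigmoid(self.head(ctx)).squeeze(-1)  # new confidences

model = ContextRescorer(num_classes=80)
new_scores = model(torch.rand(2, 100, 85))   # 100 detections per image
```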
Learning Geocentric Object Pose in Oblique Monocular Images | Gordon Christie, Rodrigo Rene Rai Munoz Abujder, Kevin Foster, Shea Hagstrom, Gregory D. Hager, Myron Z. Brown | An object's geocentric pose, defined as the height above ground and orientation with respect to gravity, is a powerful representation of real-world structure for object detection, segmentation, and localization tasks using RGBD images. For close-range vision tasks, height and orientation have been derived directly from stereo-computed depth and more recently from monocular depth predicted by deep networks. For long-range vision tasks such as Earth observation, depth cannot be reliably estimated with monocular images. Inspired by recent work in monocular height above ground prediction and optical flow prediction from static images, we develop an encoding of geocentric pose to address this challenge and train a deep network to compute the representation densely, supervised by publicly available airborne lidar. We exploit these attributes to rectify oblique images and remove observed object parallax to dramatically improve the accuracy of localization and to enable accurate alignment of multiple images taken from very different oblique viewpoints. We demonstrate the value of our approach by extending two large-scale public datasets for semantic segmentation in oblique satellite images. All of our data and code are publicly available. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Christie_Learning_Geocentric_Object_Pose_in_Oblique_Monocular_Images_CVPR_2020_paper.pdf | http://arxiv.org/abs/2007.00729 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Christie_Learning_Geocentric_Object_Pose_in_Oblique_Monocular_Images_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Christie_Learning_Geocentric_Object_Pose_in_Oblique_Monocular_Images_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
DPGN: Distribution Propagation Graph Network for Few-Shot Learning | Ling Yang, Liangliang Li, Zilun Zhang, Xinyu Zhou, Erjin Zhou, Yu Liu | Most graph-network-based meta-learning approaches model instance-level relations of examples. We extend this idea further to explicitly model the distribution-level relation of one example to all other examples in a 1-vs-N manner. We propose a novel approach named distribution propagation graph network (DPGN) for few-shot learning. It conveys both the distribution-level relations and instance-level relations in each few-shot learning task. To combine the distribution-level relations and instance-level relations for all examples, we construct a dual complete graph network which consists of a point graph and a distribution graph with each node standing for an example. Equipped with the dual graph architecture, DPGN propagates label information from labeled examples to unlabeled examples within several update generations. In extensive experiments on few-shot learning benchmarks, DPGN outperforms state-of-the-art results by a large margin of 5%-12% under the supervised setting and 7%-13% under the semi-supervised setting. Code will be released. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Yang_DPGN_Distribution_Propagation_Graph_Network_for_Few-Shot_Learning_CVPR_2020_paper.pdf | http://arxiv.org/abs/2003.14247 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Yang_DPGN_Distribution_Propagation_Graph_Network_for_Few-Shot_Learning_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Yang_DPGN_Distribution_Propagation_Graph_Network_for_Few-Shot_Learning_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
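DPGN's generations of dual-graph updates are more elaborate than anything that fits here, but classic graph label propagation conveys the underlying flavor: label information diffuses from labeled to unlabeled examples over an affinity graph. The following is that standard algorithm (Zhou et al.), offered as an analogy rather than a rendition of DPGN:

```python
import numpy as np

def propagate_labels(features, labels, alpha=0.9, n_iter=20):
    """Classic graph label propagation.

    features: (N, D) example embeddings; labels: (N, K) one-hot rows
    for labeled examples and all-zero rows for unlabeled ones.
    """
    d2 = ((features[:, None] - features[None, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (d2.mean() + 1e-12))     # Gaussian affinities
    np.fill_diagonal(W, 0.0)
    inv_sqrt_deg = 1.0 / np.sqrt(W.sum(1) + 1e-12)
    S = inv_sqrt_deg[:, None] * W * inv_sqrt_deg[None, :]
    scores = labels.astype(float).copy()
    for _ in range(n_iter):                   # label information diffuses
        scores = alpha * S @ scores + (1 - alpha) * labels
    return scores.argmax(1)                   # predicted class per example
```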
Cross-Domain Document Object Detection: Benchmark Suite and Method | Kai Li, Curtis Wigington, Chris Tensmeyer, Handong Zhao, Nikolaos Barmpalios, Vlad I. Morariu, Varun Manjunatha, Tong Sun, Yun Fu | Decomposing images of document pages into high-level semantic regions (e.g., figures, tables, paragraphs), document object detection (DOD) is fundamental for downstream tasks like intelligent document editing and understanding. DOD remains a challenging problem as document objects vary significantly in layout, size, aspect ratio, texture, etc. An additional challenge arises in practice because large labeled training datasets are only available for domains that differ from the target domain. We investigate cross-domain DOD, where the goal is to learn a detector for the target domain using labeled data from the source domain and only unlabeled data from the target domain. Documents from the two domains may vary significantly in layout, language, and genre. We establish a benchmark suite consisting of different types of PDF document datasets that can be utilized for cross-domain DOD model training and evaluation. For each dataset, we provide the page images, bounding box annotations, PDF files, and the rendering layers extracted from the PDF files. Moreover, we propose a novel cross-domain DOD model which builds upon the standard detection model and addresses domain shifts by incorporating three novel alignment modules: Feature Pyramid Alignment (FPA) module, Region Alignment (RA) module and Rendering Layer alignment (RLA) module. Extensive experiments on the benchmark suite substantiate the efficacy of the three proposed modules and the proposed method significantly outperforms the baseline methods. The project page is at https://github.com/kailigo/cddod. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Li_Cross-Domain_Document_Object_Detection_Benchmark_Suite_and_Method_CVPR_2020_paper.pdf | http://arxiv.org/abs/2003.13197 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Li_Cross-Domain_Document_Object_Detection_Benchmark_Suite_and_Method_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Li_Cross-Domain_Document_Object_Detection_Benchmark_Suite_and_Method_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
On Translation Invariance in CNNs: Convolutional Layers Can Exploit Absolute Spatial Location | Osman Semih Kayhan, Jan C. van Gemert | In this paper we challenge the common assumption that convolutional layers in modern CNNs are translation invariant. We show that CNNs can and will exploit absolute spatial location by learning filters that respond exclusively to particular absolute locations, taking advantage of image boundary effects. Because filters in modern CNNs have a huge receptive field, these boundary effects operate even far from the image boundary, allowing the network to exploit absolute spatial location all over the image. We give a simple solution to remove spatial location encoding which improves translation invariance and thus gives a stronger visual inductive bias that particularly benefits small datasets. We broadly demonstrate these benefits on several architectures and various applications such as image classification, patch matching, and two video classification datasets. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Kayhan_On_Translation_Invariance_in_CNNs_Convolutional_Layers_Can_Exploit_Absolute_CVPR_2020_paper.pdf | http://arxiv.org/abs/2003.07064 | https://www.youtube.com/watch?v=tSSowWWKwog | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Kayhan_On_Translation_Invariance_in_CNNs_Convolutional_Layers_Can_Exploit_Absolute_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Kayhan_On_Translation_Invariance_in_CNNs_Convolutional_Layers_Can_Exploit_Absolute_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
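The boundary effect is easy to reproduce: a single zero-padded convolution already responds differently near the border of a perfectly uniform image, so a deep stack can decode absolute position. A tiny demonstration follows; the paper's remedy amounts to changing how convolutions treat the boundary, while this snippet only exhibits the problem:

```python
import torch
import torch.nn as nn

# One zero-padded convolution on a perfectly uniform image: responses
# near the border differ from the interior, so position leaks in.
conv = nn.Conv2d(1, 1, kernel_size=3, padding=1, bias=False)
nn.init.constant_(conv.weight, 1.0)

y = conv(torch.ones(1, 1, 8, 8))[0, 0]
print(y[0, 0].item(), y[4, 4].item())   # 4.0 at the corner vs 9.0 inside
```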
Classifying, Segmenting, and Tracking Object Instances in Video with Mask Propagation | Gedas Bertasius, Lorenzo Torresani | We introduce a method for simultaneously classifying, segmenting and tracking object instances in a video sequence. Our method, named MaskProp, adapts the popular Mask R-CNN to video by adding a mask propagation branch that propagates frame-level object instance masks from each video frame to all the other frames in a video clip. This allows our system to predict clip-level instance tracks with respect to the object instances segmented in the middle frame of the clip. Clip-level instance tracks generated densely for each frame in the sequence are finally aggregated to produce video-level object instance segmentation and classification. Our experiments demonstrate that our clip-level instance segmentation makes our approach robust to motion blur and object occlusions in video. MaskProp achieves the best reported accuracy on the YouTube-VIS dataset, outperforming the ICCV 2019 video instance segmentation challenge winner despite being much simpler and using orders of magnitude less labeled data (1.3M vs 1B images and 860K vs 14M bounding boxes). The project page is at: https://gberta.github.io/maskprop/. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Bertasius_Classifying_Segmenting_and_Tracking_Object_Instances_in_Video_with_Mask_CVPR_2020_paper.pdf | http://arxiv.org/abs/1912.04573 | https://www.youtube.com/watch?v=t2qpBg9neUc | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Bertasius_Classifying_Segmenting_and_Tracking_Object_Instances_in_Video_with_Mask_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Bertasius_Classifying_Segmenting_and_Tracking_Object_Instances_in_Video_with_Mask_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Bertasius_Classifying_Segmenting_and_CVPR_2020_supplemental.pdf | null | null |
WaveletStereo: Learning Wavelet Coefficients of Disparity Map in Stereo Matching | Menglong Yang, Fangrui Wu, Wei Li | Several stereo matching algorithms based on deep learning have been proposed and have achieved state-of-the-art performance since large-scale public datasets became available. However, it is still difficult to estimate disparity accurately in smooth regions and detailed regions at the same time. This paper proposes a novel stereo matching method called WaveletStereo, which learns the wavelet coefficients of the disparity rather than the disparity itself. WaveletStereo consists of several sub-modules, where the low-frequency sub-module generates the low-frequency wavelet coefficients, aiming at learning global context information and handling low-frequency regions such as textureless surfaces well, while the others focus on the details. In addition, a densely connected atrous spatial pyramid block is introduced for better learning of multi-scale image features. Experimental results show the effectiveness of the proposed method, which achieves state-of-the-art performance on the large-scale test dataset Scene Flow. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Yang_WaveletStereo_Learning_Wavelet_Coefficients_of_Disparity_Map_in_Stereo_Matching_CVPR_2020_paper.pdf | null | https://www.youtube.com/watch?v=ymFZqpCqrfY | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Yang_WaveletStereo_Learning_Wavelet_Coefficients_of_Disparity_Map_in_Stereo_Matching_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Yang_WaveletStereo_Learning_Wavelet_Coefficients_of_Disparity_Map_in_Stereo_Matching_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Yang_WaveletStereo_Learning_Wavelet_CVPR_2020_supplemental.pdf | null | null |
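To make the "learn wavelet coefficients instead of disparity" idea tangible, here is the fixed-transform side of the story: a two-level decomposition of a toy disparity map into a coarse approximation (the low-frequency part a global sub-module would handle) and detail bands. The `pywt` usage is standard; the learned prediction of these coefficients is, of course, the paper's actual contribution:

```python
import numpy as np
import pywt

# A toy disparity ramp: smooth overall, as planar surfaces tend to be.
disparity = np.add.outer(np.linspace(10.0, 30.0, 64), np.zeros(64))

coeffs = pywt.wavedec2(disparity, wavelet="haar", level=2)
approx, detail_bands = coeffs[0], coeffs[1:]   # coarse part vs. edge detail

# The transform is exactly invertible, so predicting coefficients is
# as expressive as predicting the disparity map itself.
print(np.allclose(pywt.waverec2(coeffs, wavelet="haar"), disparity))
```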
Connect-and-Slice: An Hybrid Approach for Reconstructing 3D Objects | Hao Fang, Florent Lafarge | Converting point clouds generated by laser scanning, multi-view stereo imagery or depth cameras into compact polygon meshes is a challenging problem in vision. Existing methods are either robust to imperfect data or scalable, but rarely both. In this paper, we address this issue with a hybrid method that successively connects and slices planes detected from 3D data. The core idea consists in constructing an efficient and compact partitioning data structure. The latter is i) spatially-adaptive in the sense that a plane slices a restricted number of relevant planes only, and ii) composed of components with different structural meaning resulting from a preliminary analysis of the plane connectivity. Our experiments on a variety of objects and sensors show the versatility of our approach as well as its competitiveness with respect to existing methods. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Fang_Connect-and-Slice_An_Hybrid_Approach_for_Reconstructing_3D_Objects_CVPR_2020_paper.pdf | null | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Fang_Connect-and-Slice_An_Hybrid_Approach_for_Reconstructing_3D_Objects_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Fang_Connect-and-Slice_An_Hybrid_Approach_for_Reconstructing_3D_Objects_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Fang_Connect-and-Slice_An_Hybrid_CVPR_2020_supplemental.pdf | null | null |
Learning Instance Occlusion for Panoptic Segmentation | Justin Lazarow, Kwonjoon Lee, Kunyu Shi, Zhuowen Tu | Panoptic segmentation requires segments of both "things" (countable object instances) and "stuff" (uncountable and amorphous regions) within a single output. A common approach fuses instance segmentation (for "things") and semantic segmentation (for "stuff") into a non-overlapping placement of segments, which requires resolving overlaps between instances. However, ordering instances by detection confidence does not correlate well with natural occlusion relationships. To resolve this issue, we propose a branch that is tasked with modeling how two instance masks should overlap one another as a binary relation. Our method, named OCFusion, is lightweight but particularly effective in the instance fusion process. OCFusion is trained with the ground-truth relation derived automatically from the existing dataset annotations. We obtain state-of-the-art results on COCO and show competitive results on the Cityscapes panoptic segmentation benchmark. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Lazarow_Learning_Instance_Occlusion_for_Panoptic_Segmentation_CVPR_2020_paper.pdf | http://arxiv.org/abs/1906.05896 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Lazarow_Learning_Instance_Occlusion_for_Panoptic_Segmentation_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Lazarow_Learning_Instance_Occlusion_for_Panoptic_Segmentation_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
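The fusion process the abstract describes admits a compact sketch: paste masks in descending confidence, but let a pairwise occlusion predicate (standing in for OCFusion's learned binary relation head) overrule the confidence order on contested pixels.

```python
import numpy as np

def fuse_instances(masks, scores, occludes):
    """Paste instance masks in descending confidence, letting a pairwise
    occlusion predicate claim contested pixels.

    masks: list of (H, W) bool arrays; scores: list of floats;
    occludes(i, j) -> True if instance i should appear on top of j.
    """
    order = np.argsort(scores)[::-1]
    H, W = masks[0].shape
    canvas = -np.ones((H, W), int)              # -1 = unassigned
    for i in order:
        canvas[masks[i] & (canvas < 0)] = i     # free pixels go to i
        # contested pixels: already owned by some j that i occludes
        for j in np.unique(canvas[masks[i] & (canvas >= 0)]):
            if j != i and occludes(i, j):
                canvas[masks[i] & (canvas == j)] = i
    return canvas

# toy usage: the lower-scoring mask b is declared to occlude a
a = np.zeros((4, 4), bool); a[:, :3] = True     # high-confidence mask
b = np.zeros((4, 4), bool); b[:, 2:] = True     # overlapping, lower score
seg = fuse_instances([a, b], [0.9, 0.8], lambda i, j: i == 1)
print(seg[0])  # contested column 2 goes to instance 1 despite its score
```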
Graph Embedded Pose Clustering for Anomaly Detection | Amir Markovitz, Gilad Sharir, Itamar Friedman, Lihi Zelnik-Manor, Shai Avidan | We propose a new method for anomaly detection of human actions. Our method works directly on human pose graphs that can be computed from an input video sequence. This makes the analysis independent of nuisance parameters such as viewpoint or illumination. We map these graphs to a latent space and cluster them. Each action is then represented by its soft-assignment to each of the clusters. This gives a kind of "bag of words" representation to the data, where every action is represented by its similarity to a group of base action-words. Then, we use a Dirichlet process based mixture, that is useful for handling proportional data such as our soft-assignment vectors, to determine if an action is normal or not. We evaluate our method on two types of data sets. The first is a fine-grained anomaly detection data set (e.g. ShanghaiTech) where we wish to detect unusual variations of some action. The second is a coarse-grained anomaly detection data set (e.g., a Kinetics-based data set) where few actions are considered normal, and every other action should be considered abnormal. Extensive experiments on the benchmarks show that our method performs considerably better than other state-of-the-art methods. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Markovitz_Graph_Embedded_Pose_Clustering_for_Anomaly_Detection_CVPR_2020_paper.pdf | http://arxiv.org/abs/1912.11850 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Markovitz_Graph_Embedded_Pose_Clustering_for_Anomaly_Detection_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Markovitz_Graph_Embedded_Pose_Clustering_for_Anomaly_Detection_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Markovitz_Graph_Embedded_Pose_CVPR_2020_supplemental.pdf | null | null |
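A toy version of the soft-assignment "bag of action-words" and a Dirichlet-based normality score might look as follows; the cluster centers and Dirichlet parameters are made up for illustration, as if fit on normal actions, and a single Dirichlet density stands in for the paper's Dirichlet process mixture.

```python
import numpy as np
from math import lgamma

def soft_assign(z, centers, tau=1.0):
    """Soft-assignment of a latent pose embedding to cluster centers:
    the "bag of action-words" descriptor of an action."""
    w = np.exp(-np.linalg.norm(centers - z, axis=1) / tau)
    return w / w.sum()

def dirichlet_logpdf(x, alpha):
    """Log-density of a Dirichlet over the simplex; low values flag
    anomalous soft-assignment vectors."""
    return (lgamma(alpha.sum()) - sum(lgamma(a) for a in alpha)
            + ((alpha - 1) * np.log(x)).sum())

centers = np.array([[0., 0.], [1., 1.], [2., 0.]])  # toy latent clusters
alpha = np.array([8., 3., 2.])                      # assumed, as if fit on normal data
print(dirichlet_logpdf(soft_assign(np.array([0.1, 0.1]), centers), alpha))  # ~ 2.7
print(dirichlet_logpdf(soft_assign(np.array([5., -3.]), centers), alpha))   # ~ -6.9
```

The in-distribution action lands near a cluster, giving a peaked assignment the Dirichlet considers likely; the outlier spreads its mass and scores low.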
Graph Structured Network for Image-Text Matching | Chunxiao Liu, Zhendong Mao, Tianzhu Zhang, Hongtao Xie, Bin Wang, Yongdong Zhang | Image-text matching has received growing interest since it bridges vision and language. The key challenge lies in how to learn correspondence between image and text. Existing works learn coarse correspondence based on object co-occurrence statistics, while failing to learn fine-grained phrase correspondence. In this paper, we present a novel Graph Structured Matching Network (GSMN) to learn fine-grained correspondence. GSMN explicitly models objects, relations and attributes as a structured phrase, which not only allows learning the correspondence of objects, relations and attributes separately, but also benefits learning the fine-grained correspondence of structured phrases. This is achieved by node-level matching and structure-level matching. Node-level matching associates each node with its relevant nodes from the other modality, where a node can be an object, a relation or an attribute. The associated nodes then jointly infer fine-grained correspondence by fusing neighborhood associations during structure-level matching. Comprehensive experiments show that GSMN outperforms state-of-the-art methods on benchmarks, with relative Recall@1 improvements of nearly 7% and 2% on Flickr30K and MSCOCO, respectively. Code will be released at: https://github.com/CrossmodalGroup/GSMN. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Liu_Graph_Structured_Network_for_Image-Text_Matching_CVPR_2020_paper.pdf | http://arxiv.org/abs/2004.00277 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Liu_Graph_Structured_Network_for_Image-Text_Matching_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Liu_Graph_Structured_Network_for_Image-Text_Matching_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
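Node-level matching reduces, in its simplest form, to cross-modal attention: each textual node (object, relation or attribute embedding) attends over image-region embeddings and takes the attended vector as its correspondence. The sketch below assumes pre-computed embeddings and a temperature `tau`; structure-level matching would then fuse each node's matched vector with those of its graph neighbours.

```python
import numpy as np

def node_level_match(text_nodes, image_nodes, tau=0.1):
    """Each textual node attends over image-region embeddings; the
    attended vector is that node's cross-modal correspondence.

    text_nodes: (T, d) array; image_nodes: (R, d) array.
    """
    t = text_nodes / np.linalg.norm(text_nodes, axis=1, keepdims=True)
    v = image_nodes / np.linalg.norm(image_nodes, axis=1, keepdims=True)
    sim = t @ v.T                               # cosine similarities (T, R)
    att = np.exp(sim / tau)
    att /= att.sum(axis=1, keepdims=True)       # softmax over regions
    return att @ image_nodes                    # matched node features (T, d)

# toy usage: 2 phrase nodes against 3 image regions
rng = np.random.default_rng(0)
print(node_level_match(rng.normal(size=(2, 4)), rng.normal(size=(3, 4))).shape)
```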
BFBox: Searching Face-Appropriate Backbone and Feature Pyramid Network for Face Detector | Yang Liu, Xu Tang | Popular backbones designed for image classification have demonstrated considerable compatibility with the task of general object detection. However, the same does not hold for face detection, largely because the average scale of ground truth in the WiderFace dataset is far smaller than that of generic objects in COCO. To resolve this, the success of Neural Architecture Search (NAS) inspires us to search for a face-appropriate backbone and feature pyramid network (FPN) architecture. First, we design the search space for the backbone and FPN by comparing the performance of feature maps from different backbones and strong FPN architectures on face detection. Second, we propose an FPN-attention module to jointly search the architecture of the backbone and FPN. Finally, we conduct comprehensive experiments on popular benchmarks, including WiderFace, FDDB, AFW and PASCAL Face, demonstrating the superiority of our proposed method. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Liu_BFBox_Searching_Face-Appropriate_Backbone_and_Feature_Pyramid_Network_for_Face_CVPR_2020_paper.pdf | null | https://www.youtube.com/watch?v=NGujbubWmOA | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Liu_BFBox_Searching_Face-Appropriate_Backbone_and_Feature_Pyramid_Network_for_Face_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Liu_BFBox_Searching_Face-Appropriate_Backbone_and_Feature_Pyramid_Network_for_Face_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
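The joint-search idea, scoring backbone and FPN candidates in pairs rather than in isolation, can be illustrated with an exhaustive loop over a hypothetical search space; `evaluate` is a stand-in for a proxy metric such as validation AP on a face-detection split, and none of this is the paper's actual NAS procedure.

```python
import itertools

# Hypothetical joint search space: candidates are (backbone, FPN) pairs,
# since a backbone that works with one FPN variant may not with another.
BACKBONES = ["resnet50", "mobilenet", "vovnet"]
FPNS = ["fpn", "pafpn", "bifpn"]

def evaluate(backbone, fpn):
    """Stand-in proxy metric (deterministic hash of the pair); a real
    search would train briefly and measure validation AP."""
    import random
    random.seed(sum(map(ord, backbone + fpn)))
    return random.random()

best = max(itertools.product(BACKBONES, FPNS), key=lambda c: evaluate(*c))
print("best (backbone, FPN) pair:", best)
```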
End-to-End Adversarial-Attention Network for Multi-Modal Clustering | Runwu Zhou, Yi-Dong Shen | Multi-modal clustering aims to group data by exploring complementary information from multiple modalities or views. Little prior work learns deep fused representations while simultaneously discovering the cluster structure with a discriminative loss. In this paper, we present an End-to-end Adversarial-attention network for Multi-modal Clustering (EAMC), where adversarial learning and an attention mechanism are leveraged to align the latent feature distributions and to quantify the importance of each modality, respectively. To benefit from the joint training, we introduce a divergence-based clustering objective that not only encourages the separation and compactness of the clusters but also yields a clear cluster structure by embedding the simplex geometry of the output space into the loss. The proposed network consists of three modules: modality-specific feature learning, modality fusion and cluster assignment. It can be trained from scratch with batch-mode optimization and avoids an autoencoder pretraining stage. Comprehensive experiments conducted on five real-world datasets show the superiority and effectiveness of the proposed clustering method. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Zhou_End-to-End_Adversarial-Attention_Network_for_Multi-Modal_Clustering_CVPR_2020_paper.pdf | null | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Zhou_End-to-End_Adversarial-Attention_Network_for_Multi-Modal_Clustering_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Zhou_End-to-End_Adversarial-Attention_Network_for_Multi-Modal_Clustering_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
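A rough shape of a divergence-style clustering objective with a simplex term, given soft cluster assignments and point embeddings, is sketched below as a stand-in for EAMC's exact loss: compactness and separation terms shape the clusters, and a corner-distance term embeds the simplex geometry by pushing assignments toward one-hot vectors.

```python
import numpy as np

def clustering_loss(assign, embeddings):
    """Sketch of a clustering objective: compact, well-separated clusters
    plus a simplex term pulling soft assignments toward corners.

    assign: (N, k) soft assignments; embeddings: (N, d) features.
    """
    k = assign.shape[1]
    centers = (assign.T @ embeddings) / assign.sum(0, keepdims=True).T
    # compactness: assignment-weighted distance of points to centers
    compact = (assign * np.linalg.norm(
        embeddings[:, None, :] - centers[None, :, :], axis=-1)).mean()
    # separation: mean pairwise center distance (maximized, hence minus)
    iu = np.triu_indices(k, 1)
    sep = np.linalg.norm(centers[iu[0]] - centers[iu[1]], axis=-1).mean()
    # simplex term: distance to the nearest corner of the simplex
    simplex = np.min(np.linalg.norm(
        assign[:, None, :] - np.eye(k)[None, :, :], axis=-1), axis=1).mean()
    return compact - sep + simplex

# toy usage: two tight, far-apart blobs with crisp assignments score low
rng = np.random.default_rng(0)
x = np.vstack([rng.normal(0, .1, (10, 2)), rng.normal(3, .1, (10, 2))])
a = np.repeat(np.eye(2), 10, axis=0)
print(clustering_loss(a, x))
```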
What You See is What You Get: Exploiting Visibility for 3D Object Detection | Peiyun Hu, Jason Ziglar, David Held, Deva Ramanan | Recent advances in 3D sensing have created unique challenges for computer vision. One fundamental challenge is finding a good representation for 3D sensor data. Most popular representations (such as PointNet) are proposed in the context of processing truly 3D data (e.g. points sampled from mesh models), ignoring the fact that 3D sensor data such as a LiDAR sweep is in fact 2.5D. We argue that representing 2.5D data as collections of (x,y,z) points fundamentally destroys hidden information about freespace. In this paper, we demonstrate that such knowledge can be efficiently recovered through 3D raycasting and readily incorporated into batch-based gradient learning. We describe a simple approach to augmenting voxel-based networks with visibility: we add a voxelized visibility map as an additional input stream. In addition, we show that visibility can be combined with two crucial modifications common to state-of-the-art 3D detectors: synthetic data augmentation of virtual objects and temporal aggregation of LiDAR sweeps over multiple time frames. On the NuScenes 3D detection benchmark, we show that, by adding an additional stream for visibility input, we can significantly improve the overall detection accuracy of a state-of-the-art 3D detector. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Hu_What_You_See_is_What_You_Get_Exploiting_Visibility_for_CVPR_2020_paper.pdf | http://arxiv.org/abs/1912.04986 | https://www.youtube.com/watch?v=EXYzi-ATzRE | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Hu_What_You_See_is_What_You_Get_Exploiting_Visibility_for_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Hu_What_You_See_is_What_You_Get_Exploiting_Visibility_for_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Hu_What_You_See_CVPR_2020_supplemental.pdf | null | null |
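The visibility recovery described above boils down to raycasting from the sensor through each LiDAR return. Below is a coarse NumPy stand-in for exact voxel traversal: it assumes points in a nonnegative grid frame with the sensor origin inside the grid, and marks traversed voxels free, endpoints occupied, and everything else unknown.

```python
import numpy as np

def visibility_map(points, origin, grid_shape, voxel=0.5, samples=64):
    """Approximate raycasting: sample along each sensor-to-return ray,
    marking voxels free (1) and the endpoint occupied (2); 0 = unknown.
    A coarse stand-in for exact voxel traversal (e.g. Amanatides-Woo).
    """
    vis = np.zeros(grid_shape, np.uint8)
    hi = np.array(grid_shape) - 1
    for p in points:
        for t in np.linspace(0.0, 1.0, samples):
            q = origin + t * (p - origin)
            vis[tuple(np.clip((q / voxel).astype(int), 0, hi))] = 1  # free
        vis[tuple(np.clip((p / voxel).astype(int), 0, hi))] = 2      # hit
    return vis

# toy usage: one return at (10, 10, 2) m in a 32x32x8-voxel grid
vis = visibility_map(np.array([[10., 10., 2.]]), np.zeros(3), (32, 32, 8))
print((vis == 1).sum(), (vis == 2).sum())  # free voxels along the ray, 1 hit
```

The resulting 3D map is exactly the kind of extra input stream the abstract proposes feeding to a voxel-based detector.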
Visual Grounding in Video for Unsupervised Word Translation | Gunnar A. Sigurdsson, Jean-Baptiste Alayrac, Aida Nematzadeh, Lucas Smaira, Mateusz Malinowski, Joao Carreira, Phil Blunsom, Andrew Zisserman | There are thousands of actively spoken languages on Earth, but a single visual world. Grounding in this visual world has the potential to bridge the gap between all these languages. Our goal is to use visual grounding to improve unsupervised word mapping between languages. The key idea is to establish a common visual representation between two languages by learning embeddings from unpaired instructional videos narrated in the native language. Given this shared embedding we demonstrate that (i) we can map words between the languages, particularly the 'visual' words; (ii) that the shared embedding provides a good initialization for existing unsupervised text-based word translation techniques, forming the basis for our proposed hybrid visual-text mapping algorithm, MUVE; and (iii) our approach achieves superior performance by addressing the shortcomings of text-based methods -- it is more robust, handles datasets with less commonality, and is applicable to low-resource languages. We apply these methods to translate words from English to French, Korean, and Japanese -- all without any parallel corpora and simply by watching many videos of people speaking while doing things. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Sigurdsson_Visual_Grounding_in_Video_for_Unsupervised_Word_Translation_CVPR_2020_paper.pdf | http://arxiv.org/abs/2003.05078 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Sigurdsson_Visual_Grounding_in_Video_for_Unsupervised_Word_Translation_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Sigurdsson_Visual_Grounding_in_Video_for_Unsupervised_Word_Translation_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Sigurdsson_Visual_Grounding_in_CVPR_2020_supplemental.pdf | null | null |
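Once two languages share a visually grounded embedding space, word mapping is nearest-neighbour retrieval by cosine similarity. The sketch below assumes such embeddings are already trained (the 2-D vectors in the usage line are made up) and illustrates only the retrieval step, not MUVE's full hybrid algorithm.

```python
import numpy as np

def translate(words_src, emb_src, words_tgt, emb_tgt):
    """Map each source word to the nearest target word by cosine
    similarity in a shared (here: visually grounded) embedding space."""
    s = emb_src / np.linalg.norm(emb_src, axis=1, keepdims=True)
    t = emb_tgt / np.linalg.norm(emb_tgt, axis=1, keepdims=True)
    nearest = (s @ t.T).argmax(axis=1)
    return {w: words_tgt[i] for w, i in zip(words_src, nearest)}

# toy usage with made-up 2-D "embeddings"
print(translate(["dog", "cut"], np.array([[1., 0.], [0., 1.]]),
                ["chien", "couper"], np.array([[0.9, 0.1], [0.1, 0.9]])))
```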
Learning Individual Speaking Styles for Accurate Lip to Speech Synthesis | K R Prajwal, Rudrabha Mukhopadhyay, Vinay P. Namboodiri, C.V. Jawahar | Humans involuntarily tend to infer parts of the conversation from lip movements when the speech is absent or corrupted by external noise. In this work, we explore the task of lip to speech synthesis, i.e., learning to generate natural speech given only the lip movements of a speaker. Acknowledging the importance of contextual and speaker-specific cues for accurate lip-reading, we take a different path from existing works. We focus on learning accurate lip sequences to speech mappings for individual speakers in unconstrained, large vocabulary settings. To this end, we collect and release a large-scale benchmark dataset, the first of its kind, specifically to train and evaluate the single-speaker lip to speech task in natural settings. We propose a novel approach with key design choices to achieve accurate, natural lip to speech synthesis in such unconstrained scenarios for the first time. Extensive evaluation using quantitative, qualitative metrics and human evaluation shows that our method is four times more intelligible than previous works in this space. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Prajwal_Learning_Individual_Speaking_Styles_for_Accurate_Lip_to_Speech_Synthesis_CVPR_2020_paper.pdf | http://arxiv.org/abs/2005.08209 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Prajwal_Learning_Individual_Speaking_Styles_for_Accurate_Lip_to_Speech_Synthesis_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Prajwal_Learning_Individual_Speaking_Styles_for_Accurate_Lip_to_Speech_Synthesis_CVPR_2020_paper.html | CVPR 2020 | null | https://cove.thecvf.com/datasets/363 | null |
Adversarial Latent Autoencoders | Stanislav Pidhorskyi, Donald A. Adjeroh, Gianfranco Doretto | Autoencoder networks are unsupervised approaches that aim to combine generative and representational properties by simultaneously learning an encoder-generator map. Although studied extensively, the questions of whether they have the same generative power as GANs, or whether they learn disentangled representations, have not been fully addressed. We introduce an autoencoder that tackles these issues jointly, which we call the Adversarial Latent Autoencoder (ALAE). It is a general architecture that can leverage recent improvements in GAN training procedures. We designed two autoencoders: one based on an MLP encoder, and another based on a StyleGAN generator, which we call StyleALAE. We verify the disentanglement properties of both architectures. We show that StyleALAE can not only generate 1024x1024 face images of quality comparable to StyleGAN's, but at the same resolution can also produce face reconstructions and manipulations based on real images. This makes ALAE the first autoencoder able to compare with, and go beyond the capabilities of, a generator-only type of architecture. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Pidhorskyi_Adversarial_Latent_Autoencoders_CVPR_2020_paper.pdf | http://arxiv.org/abs/2004.04467 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Pidhorskyi_Adversarial_Latent_Autoencoders_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Pidhorskyi_Adversarial_Latent_Autoencoders_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
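A key departure of ALAE from a pixel-space autoencoder is that reciprocity is imposed in latent space. A toy linear sketch of that loss follows, with fixed matrices standing in for the trained generator and encoder networks; with the pseudo-inverse pair, the latent reconstruction error is numerically zero.

```python
import numpy as np

rng = np.random.default_rng(0)
W = 8                                    # toy latent width
G = rng.normal(size=(W, 16))             # "generator": latent -> data
E = np.linalg.pinv(G)                    # "encoder", initialised as inverse

def latent_reciprocity_loss(w):
    """ALAE-style reciprocity sketch: reconstruction is measured in latent
    space, ||w - E(G(w))||^2, instead of pixel space."""
    w_rec = (w @ G) @ E
    return float(((w - w_rec) ** 2).mean())

print(latent_reciprocity_loss(rng.normal(size=(4, W))))  # ~0 for this pair
```

In the real model, G and E are deep networks and an adversarial critic on the data keeps G(w) realistic while this latent loss keeps the pair reciprocal.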