title | authors | abstract | pdf | arXiv | video | bibtex | url | detail_url | tags | supp | dataset | |
---|---|---|---|---|---|---|---|---|---|---|---|---|
CycleISP: Real Image Restoration via Improved Data Synthesis | Syed Waqas Zamir, Aditya Arora, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, Ming-Hsuan Yang, Ling Shao | The availability of large-scale datasets has helped unleash the true potential of deep convolutional neural networks (CNNs). However, for the single-image denoising problem, capturing a real dataset is an unacceptably expensive and cumbersome procedure. Consequently, image denoising algorithms are mostly developed and evaluated on synthetic data that is usually generated with a widespread assumption of additive white Gaussian noise (AWGN). While the CNNs achieve impressive results on these synthetic datasets, they do not perform well when applied to real camera images, as reported in recent benchmark datasets. This is mainly because the AWGN is not adequate for modeling the real camera noise, which is signal-dependent and heavily transformed by the camera imaging pipeline. In this paper, we present a framework that models the camera imaging pipeline in forward and reverse directions. It allows us to produce any number of realistic image pairs for denoising both in RAW and sRGB spaces. By training a new image denoising network on realistic synthetic data, we achieve state-of-the-art performance on real camera benchmark datasets. Our models have 5 times fewer parameters than the previous best method for RAW denoising. Furthermore, we demonstrate that the proposed framework generalizes beyond the image denoising problem, e.g., to color matching in stereoscopic cinema. The source code and pre-trained models are available at https://github.com/swz30/CycleISP. | https://openaccess.thecvf.com/content_CVPR_2020/papers/Zamir_CycleISP_Real_Image_Restoration_via_Improved_Data_Synthesis_CVPR_2020_paper.pdf | http://arxiv.org/abs/2003.07761 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Zamir_CycleISP_Real_Image_Restoration_via_Improved_Data_Synthesis_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Zamir_CycleISP_Real_Image_Restoration_via_Improved_Data_Synthesis_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com/content_CVPR_2020/supplemental/Zamir_CycleISP_Real_Image_CVPR_2020_supplemental.pdf | null | null |
Neural Network Pruning With Residual-Connections and Limited-Data | Jian-Hao Luo, Jianxin Wu | Filter level pruning is an effective method to accelerate the inference speed of deep CNN models. Although numerous pruning algorithms have been proposed, there are still two open issues. The first problem is how to prune residual connections. We propose to prune both channels inside and outside the residual connections via a KL-divergence based criterion. The second issue is pruning with limited data. We observe an interesting phenomenon: directly pruning on a small dataset is usually worse than fine-tuning a small model which is pruned or trained from scratch on the large dataset. Knowledge distillation is an effective approach to compensate for the weakness of limited data. However, the logits of a teacher model may be noisy. In order to avoid the influence of label noise, we propose a label refinement approach to solve this problem. Experiments have demonstrated the effectiveness of our method (CURL, Compression Using Residual-connections and Limited-data). CURL significantly outperforms previous state-of-the-art methods on ImageNet. More importantly, when pruning on small datasets, CURL achieves comparable or much better performance than fine-tuning a pretrained small model. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Luo_Neural_Network_Pruning_With_Residual-Connections_and_Limited-Data_CVPR_2020_paper.pdf | http://arxiv.org/abs/1911.08114 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Luo_Neural_Network_Pruning_With_Residual-Connections_and_Limited-Data_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Luo_Neural_Network_Pruning_With_Residual-Connections_and_Limited-Data_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
Neural Cages for Detail-Preserving 3D Deformations | Wang Yifan, Noam Aigerman, Vladimir G. Kim, Siddhartha Chaudhuri, Olga Sorkine-Hornung | We propose a novel learnable representation for detail preserving shape deformation. The goal of our method is to warp a source shape to match the general structure of a target shape, while preserving the surface details of the source. Our method extends a traditional cage-based deformation technique, where the source shape is enclosed by a coarse control mesh termed cage, and translations prescribed on the cage vertices are interpolated to any point on the source mesh via special weight functions. The use of this sparse cage scaffolding enables preserving surface details regardless of the shape's intricacy and topology. Our key contribution is a novel neural network architecture for predicting deformations by controlling the cage. We incorporate a differentiable cage-based deformation module in our architecture, and train our network end-to-end. Our method can be trained with common collections of 3D models in an unsupervised fashion, without any cage-specific annotations. We demonstrate the utility of our method for synthesizing shape variations and deformation transfer. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Yifan_Neural_Cages_for_Detail-Preserving_3D_Deformations_CVPR_2020_paper.pdf | http://arxiv.org/abs/1912.06395 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Yifan_Neural_Cages_for_Detail-Preserving_3D_Deformations_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Yifan_Neural_Cages_for_Detail-Preserving_3D_Deformations_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Yifan_Neural_Cages_for_CVPR_2020_supplemental.pdf | null | null |
Fashion Outfit Complementary Item Retrieval | Yen-Liang Lin, Son Tran, Larry S. Davis | Complementary fashion item recommendation is critical for fashion outfit completion. Existing methods mainly focus on outfit compatibility prediction but not in a retrieval setting. We propose a new framework for outfit complementary item retrieval. Specifically, a category-based subspace attention network is presented, which is a scalable approach for learning the subspace attentions. In addition, we introduce an outfit ranking loss that better models the item relationships of an entire outfit. We evaluate our method on the outfit compatibility, FITB and new retrieval tasks. Experimental results demonstrate that our approach outperforms state-of-the-art methods in both compatibility prediction and complementary item retrieval. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Lin_Fashion_Outfit_Complementary_Item_Retrieval_CVPR_2020_paper.pdf | http://arxiv.org/abs/1912.08967 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Lin_Fashion_Outfit_Complementary_Item_Retrieval_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Lin_Fashion_Outfit_Complementary_Item_Retrieval_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
SCT: Set Constrained Temporal Transformer for Set Supervised Action Segmentation | Mohsen Fayyaz, Jurgen Gall | Temporal action segmentation is a topic of increasing interest, however, annotating each frame in a video is cumbersome and costly. Weakly supervised approaches therefore aim at learning temporal action segmentation from videos that are only weakly labeled. In this work, we assume that for each training video only the list of actions is given that occur in the video, but not when, how often, and in which order they occur. In order to address this task, we propose an approach that can be trained end-to-end on such data. The approach divides the video into smaller temporal regions and predicts for each region the action label and its length. In addition, the network estimates the action labels for each frame. By measuring how consistent the frame-wise predictions are with respect to the temporal regions and the annotated action labels, the network learns to divide a video into class-consistent regions. We evaluate our approach on three datasets where the approach achieves state-of-the-art results. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Fayyaz_SCT_Set_Constrained_Temporal_Transformer_for_Set_Supervised_Action_Segmentation_CVPR_2020_paper.pdf | http://arxiv.org/abs/2003.14266 | https://www.youtube.com/watch?v=OOJfklMtTWg | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Fayyaz_SCT_Set_Constrained_Temporal_Transformer_for_Set_Supervised_Action_Segmentation_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Fayyaz_SCT_Set_Constrained_Temporal_Transformer_for_Set_Supervised_Action_Segmentation_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
CPR-GCN: Conditional Partial-Residual Graph Convolutional Network in Automated Anatomical Labeling of Coronary Arteries | Han Yang, Xingjian Zhen, Ying Chi, Lei Zhang, Xian-Sheng Hua | Automated anatomical labeling plays a vital role in the diagnosis of coronary artery disease. The main challenge in this problem is the large individual variability inherent in human anatomy. Existing methods usually rely on the position information and the prior knowledge of the topology of the coronary artery tree, which may lead to unsatisfactory performance when the main branches are confusing. Motivated by the wide application of the graph neural network in structured data, in this paper, we propose a conditional partial-residual graph convolutional network (CPR-GCN), which takes both position and CT image into consideration, since the CT image contains abundant information such as branch size and spanning direction. Two major parts, a Partial-Residual GCN and a conditions extractor, are included in CPR-GCN. The conditions extractor is a hybrid model containing the 3D CNN and the LSTM, which can extract 3D spatial image features along the branches. On the technical side, the Partial-Residual GCN takes the position features of the branches, with the 3D spatial image features as conditions, to predict the label for each branch. On the mathematical side, our approach incorporates the partial differential equation (PDE) into the graph modeling. A dataset with 511 subjects is collected from the clinic and annotated by two experts with a two-phase annotation process. According to the five-fold cross-validation, our CPR-GCN yields 95.8% meanRecall, 95.4% meanPrecision and 0.955 meanF1, which outperforms state-of-the-art approaches. | https://openaccess.thecvf.com/content_CVPR_2020/papers/Yang_CPR-GCN_Conditional_Partial-Residual_Graph_Convolutional_Network_in_Automated_Anatomical_Labeling_CVPR_2020_paper.pdf | null | https://www.youtube.com/watch?v=3YssKe2_h6o | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Yang_CPR-GCN_Conditional_Partial-Residual_Graph_Convolutional_Network_in_Automated_Anatomical_Labeling_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Yang_CPR-GCN_Conditional_Partial-Residual_Graph_Convolutional_Network_in_Automated_Anatomical_Labeling_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
Neural Blind Deconvolution Using Deep Priors | Dongwei Ren, Kai Zhang, Qilong Wang, Qinghua Hu, Wangmeng Zuo | Blind deconvolution is a classical yet challenging low-level vision problem with many real-world applications. Traditional maximum a posterior (MAP) based methods rely heavily on fixed and handcrafted priors that certainly are insufficient in characterizing clean images and blur kernels, and usually adopt specially designed alternating minimization to avoid trivial solution. In contrast, existing deep motion deblurring networks learn from massive training images the mapping to clean image or blur kernel, but are limited in handling various complex and large size blur kernels. To connect MAP and deep models, we in this paper present two generative networks for respectively modeling the deep priors of clean image and blur kernel, and propose an unconstrained neural optimization solution to blind deconvolution. In particular, we adopt an asymmetric Autoencoder with skip connections for generating latent clean image, and a fully-connected network (FCN) for generating blur kernel. Moreover, the SoftMax nonlinearity is applied to the output layer of FCN to meet the non-negative and equality constraints. The process of neural optimization can be explained as a kind of "zero-shot" self-supervised learning of the generative networks, and thus our proposed method is dubbed SelfDeblur. Experimental results show that our SelfDeblur can achieve notable quantitative gains as well as more visually plausible deblurring results in comparison to state-of-the-art blind deconvolution methods on benchmark datasets and real-world blurry images. The source code is publicly available at https://github.com/csdwren/SelfDeblur | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Ren_Neural_Blind_Deconvolution_Using_Deep_Priors_CVPR_2020_paper.pdf | http://arxiv.org/abs/1908.02197 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Ren_Neural_Blind_Deconvolution_Using_Deep_Priors_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Ren_Neural_Blind_Deconvolution_Using_Deep_Priors_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Ren_Neural_Blind_Deconvolution_CVPR_2020_supplemental.pdf | null | null |
3D Packing for Self-Supervised Monocular Depth Estimation | Vitor Guizilini, Rares Ambrus, Sudeep Pillai, Allan Raventos, Adrien Gaidon | Although cameras are ubiquitous, robotic platforms typically rely on active sensors like LiDAR for direct 3D perception. In this work, we propose a novel self-supervised monocular depth estimation method combining geometry with a new deep network, PackNet, learned only from unlabeled monocular videos. Our architecture leverages novel symmetrical packing and unpacking blocks to jointly learn to compress and decompress detail-preserving representations using 3D convolutions. Although self-supervised, our method outperforms other self, semi, and fully supervised methods on the KITTI benchmark. The 3D inductive bias in PackNet enables it to scale with input resolution and number of parameters without overfitting, generalizing better on out-of-domain data such as the NuScenes dataset. Furthermore, it does not require large-scale supervised pretraining on ImageNet and can run in real-time. Finally, we release DDAD (Dense Depth for Automated Driving), a new urban driving dataset with more challenging and accurate depth evaluation, thanks to longer-range and denser ground-truth depth generated from high-density LiDARs mounted on a fleet of self-driving cars operating world-wide. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Guizilini_3D_Packing_for_Self-Supervised_Monocular_Depth_Estimation_CVPR_2020_paper.pdf | http://arxiv.org/abs/1905.02693 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Guizilini_3D_Packing_for_Self-Supervised_Monocular_Depth_Estimation_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Guizilini_3D_Packing_for_Self-Supervised_Monocular_Depth_Estimation_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Guizilini_3D_Packing_for_CVPR_2020_supplemental.zip | null | null |
Local Non-Rigid Structure-From-Motion From Diffeomorphic Mappings | Shaifali Parashar, Mathieu Salzmann, Pascal Fua | We propose a new formulation to non-rigid structure-from-motion that only requires the deforming surface to preserve its differential structure. This is a much weaker assumption than the traditional ones of isometry or conformality. We show that it is nevertheless sufficient to establish local correspondences between the surface in two different images and therefore to perform point-wise reconstruction using only first-order derivatives. To this end, we formulate differential constraints and solve them algebraically using the theory of resultants. We will demonstrate that our approach is more widely applicable, more stable in noisy and sparse imaging conditions and much faster than earlier ones, while delivering similar accuracy. The code is available at https://github.com/cvlab-epfl/diff-nrsfm/. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Parashar_Local_Non-Rigid_Structure-From-Motion_From_Diffeomorphic_Mappings_CVPR_2020_paper.pdf | null | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Parashar_Local_Non-Rigid_Structure-From-Motion_From_Diffeomorphic_Mappings_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Parashar_Local_Non-Rigid_Structure-From-Motion_From_Diffeomorphic_Mappings_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Parashar_Local_Non-Rigid_Structure-From-Motion_CVPR_2020_supplemental.pdf | null | null |
Structure Preserving Generative Cross-Domain Learning | Haifeng Xia, Zhengming Ding | Unsupervised domain adaptation (UDA) casts a light when dealing with insufficient or no labeled data in the target domain by exploring the well-annotated source knowledge in different distributions. Most research efforts on UDA explore to seek a domain-invariant classifier over source supervision. However, due to the scarcity of label information in the target domain, such a classifier has a lack of ground-truth target supervision, which dramatically obstructs the robustness and discrimination of the classifier. To this end, we develop a novel Generative cross-domain learning via Structure-Preserving (GSP), which attempts to transform target data into the source domain in order to take advantage of source supervision. Specifically, a novel cross-domain graph alignment is developed to capture the intrinsic relationship across two domains during target-source translation. Simultaneously, two distinct classifiers are trained to trigger the domain-invariant feature learning both guided with source supervision, one is a traditional source classifier and the other is a source-supervised target classifier. Extensive experimental results on several cross-domain visual benchmarks have demonstrated the effectiveness of our model by comparing with other state-of-the-art UDA algorithms. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Xia_Structure_Preserving_Generative_Cross-Domain_Learning_CVPR_2020_paper.pdf | null | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Xia_Structure_Preserving_Generative_Cross-Domain_Learning_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Xia_Structure_Preserving_Generative_Cross-Domain_Learning_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
Generative Hybrid Representations for Activity Forecasting With No-Regret Learning | Jiaqi Guan, Ye Yuan, Kris M. Kitani, Nicholas Rhinehart | Automatically reasoning about future human behaviors is a difficult problem but has significant practical applications to assistive systems. Part of this difficulty stems from learning systems' inability to represent all kinds of behaviors. Some behaviors, such as motion, are best described with continuous representations, whereas others, such as picking up a cup, are best described with discrete representations. Furthermore, human behavior is generally not fixed: people can change their habits and routines. This suggests these systems must be able to learn and adapt continuously. In this work, we develop an efficient deep generative model to jointly forecast a person's future discrete actions and continuous motions. On a large-scale egocentric dataset, EPIC-KITCHENS, we observe our method generates high-quality and diverse samples while exhibiting better generalization than related generative models. Finally, we propose a variant to continually learn our model from streaming data, observe its practical effectiveness, and theoretically justify its learning efficiency. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Guan_Generative_Hybrid_Representations_for_Activity_Forecasting_With_No-Regret_Learning_CVPR_2020_paper.pdf | http://arxiv.org/abs/1904.06250 | https://www.youtube.com/watch?v=qVYdCuSzIIo | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Guan_Generative_Hybrid_Representations_for_Activity_Forecasting_With_No-Regret_Learning_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Guan_Generative_Hybrid_Representations_for_Activity_Forecasting_With_No-Regret_Learning_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Guan_Generative_Hybrid_Representations_CVPR_2020_supplemental.pdf | null | null |
Predicting Cognitive Declines Using Longitudinally Enriched Representations for Imaging Biomarkers | Lyujian Lu, Hua Wang, Saad Elbeleidy, Feiping Nie | With rapid progress in high-throughput genotyping and neuroimaging, research on complex brain disorders, such as Alzheimer's Disease (AD), has gained significant attention in recent years. Many prediction models have been studied to relate neuroimaging measures to cognitive status as the disease progresses. Missing data is one of the biggest challenges in accurate cognitive score prediction of subjects in longitudinal neuroimaging studies. To tackle this problem, in this paper we propose a novel formulation to learn an enriched representation for imaging biomarkers that simultaneously captures both the information conveyed by baseline neuroimaging records and that conveyed by the progressive variations across the varying numbers of available follow-up records over time. While the number of brain scans varies across participants, the learned biomarker representation for every participant is a fixed-length vector, which enables us to use traditional learning models to study AD development. Our new objective is formulated to maximize the ratio of the summations of a number of L1-norm distances for improved robustness, which, however, is difficult to solve efficiently in general. Thus we derive a new efficient iterative solution algorithm and rigorously prove its convergence. We have performed extensive experiments on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. A performance gain in predicting four different cognitive scores is achieved when we compare the learned, enriched representations against the original baseline representations. These promising empirical results demonstrate the improved performance of our new method and validate its effectiveness. | https://openaccess.thecvf.com/content_CVPR_2020/papers/Lu_Predicting_Cognitive_Declines_Using_Longitudinally_Enriched_Representations_for_Imaging_Biomarkers_CVPR_2020_paper.pdf | null | https://www.youtube.com/watch?v=qqaDTRq28o8 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Lu_Predicting_Cognitive_Declines_Using_Longitudinally_Enriched_Representations_for_Imaging_Biomarkers_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Lu_Predicting_Cognitive_Declines_Using_Longitudinally_Enriched_Representations_for_Imaging_Biomarkers_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
3D Sketch-Aware Semantic Scene Completion via Semi-Supervised Structure Prior | Xiaokang Chen, Kwan-Yee Lin, Chen Qian, Gang Zeng, Hongsheng Li | The goal of the Semantic Scene Completion (SSC) task is to simultaneously predict a completed 3D voxel representation of volumetric occupancy and semantic labels of objects in the scene from a single-view observation. Since the computational cost generally increases explosively along with the growth of voxel resolution, most current state-of-the-arts have to tailor their framework into a low-resolution representation with the sacrifice of detail prediction. Thus, voxel resolution becomes one of the crucial difficulties that lead to the performance bottleneck. In this paper, we propose to devise a new geometry-based strategy to embed depth information with low-resolution voxel representation, which could still be able to encode sufficient geometric information, e.g., room layout, object's sizes and shapes, to infer the invisible areas of the scene with well structure-preserving details. To this end, we first propose a novel 3D sketch-aware feature embedding to explicitly encode geometric information effectively and efficiently. With the 3D sketch in hand, we further devise a simple yet effective semantic scene completion framework that incorporates a light-weight 3D Sketch Hallucination module to guide the inference of occupancy and the semantic labels via a semi-supervised structure prior learning strategy. We demonstrate that our proposed geometric embedding works better than the depth feature learning from habitual SSC frameworks. Our final model surpasses state- of-the-arts consistently on three public benchmarks, which only requires 3D volumes of 60 x 36 x 60 resolution for both input and output. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Chen_3D_Sketch-Aware_Semantic_Scene_Completion_via_Semi-Supervised_Structure_Prior_CVPR_2020_paper.pdf | http://arxiv.org/abs/2003.14052 | https://www.youtube.com/watch?v=5S3KfwuUFdo | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Chen_3D_Sketch-Aware_Semantic_Scene_Completion_via_Semi-Supervised_Structure_Prior_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Chen_3D_Sketch-Aware_Semantic_Scene_Completion_via_Semi-Supervised_Structure_Prior_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Chen_3D_Sketch-Aware_Semantic_CVPR_2020_supplemental.zip | null | null |
Progressive Mirror Detection | Jiaying Lin, Guodong Wang, Rynson W.H. Lau | The mirror detection problem is important as mirrors can affect the performances of many vision tasks. It is a difficult problem as it requires an understanding of global scene semantics. Recently, a method was proposed to detect mirrors by learning multi-level contextual contrasts between inside and outside of mirrors, which helps locate mirror edges implicitly. We observe that the content of a mirror reflects the content of its surrounding, separated by the edge of the mirror. Hence, we propose a model in this paper to progressively learn the content similarity between the inside and outside of the mirror while explicitly detecting the mirror edges. Our work has two main contributions. First, we propose a new relational contextual contrasted local (RCCL) module to extract and compare the mirror features with its corresponding context features, and an edge detection and fusion (EDF) module to learn the features of mirror edges in complex scenes via explicit supervision. Second, we construct a challenging benchmark dataset of 6,461 mirror images. Unlike the existing MSD dataset, which has limited diversity, our dataset covers a variety of scenes and is much larger in scale. Experimental results show that our model outperforms relevant state-of-the-art methods. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Lin_Progressive_Mirror_Detection_CVPR_2020_paper.pdf | null | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Lin_Progressive_Mirror_Detection_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Lin_Progressive_Mirror_Detection_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Lin_Progressive_Mirror_Detection_CVPR_2020_supplemental.pdf | null | null |
FOAL: Fast Online Adaptive Learning for Cardiac Motion Estimation | Hanchao Yu, Shanhui Sun, Haichao Yu, Xiao Chen, Honghui Shi, Thomas S. Huang, Terrence Chen | Motion estimation of cardiac MRI videos is crucial for the evaluation of human heart anatomy and function. Recent research shows promising results with deep learning-based methods. In clinical deployment, however, these methods suffer dramatic performance drops due to mismatched distributions between training and testing datasets, commonly encountered in the clinical environment. On the other hand, it is arguably impossible to collect all representative datasets and to train a universal tracker before deployment. In this context, we propose a novel fast online adaptive learning (FOAL) framework: an online gradient descent based optimizer that is optimized by a meta-learner. The meta-learner enables the online optimizer to perform a fast and robust adaptation. We evaluated our method through extensive experiments on two public clinical datasets. The results showed the superior accuracy of FOAL compared to the offline-trained tracking method. On average, FOAL took only 0.4 seconds per video for online optimization. | https://openaccess.thecvf.com/content_CVPR_2020/papers/Yu_FOAL_Fast_Online_Adaptive_Learning_for_Cardiac_Motion_Estimation_CVPR_2020_paper.pdf | http://arxiv.org/abs/2003.04492 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Yu_FOAL_Fast_Online_Adaptive_Learning_for_Cardiac_Motion_Estimation_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Yu_FOAL_Fast_Online_Adaptive_Learning_for_Cardiac_Motion_Estimation_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
Don't Hit Me! Glass Detection in Real-World Scenes | Haiyang Mei, Xin Yang, Yang Wang, Yuanyuan Liu, Shengfeng He, Qiang Zhang, Xiaopeng Wei, Rynson W.H. Lau | Glass is very common in our daily life. Existing computer vision systems neglect it and thus may have severe consequences, e.g., a robot may crash into a glass wall. However, sensing the presence of glass is not straightforward. The key challenge is that arbitrary objects/scenes can appear behind the glass, and the content within the glass region is typically similar to those behind it. In this paper, we propose an important problem of detecting glass from a single RGB image. To address this problem, we construct a large-scale glass detection dataset (GDD) and design a glass detection network, called GDNet, which explores abundant contextual cues for robust glass detection with a novel large-field contextual feature integration (LCFI) module. Extensive experiments demonstrate that the proposed method achieves more superior glass detection results on our GDD test set than state-of-the-art methods fine-tuned for glass detection. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Mei_Dont_Hit_Me_Glass_Detection_in_Real-World_Scenes_CVPR_2020_paper.pdf | null | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Mei_Dont_Hit_Me_Glass_Detection_in_Real-World_Scenes_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Mei_Dont_Hit_Me_Glass_Detection_in_Real-World_Scenes_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Mei_Dont_Hit_Me_CVPR_2020_supplemental.pdf | null | null |
Deep Grouping Model for Unified Perceptual Parsing | Zhiheng Li, Wenxuan Bao, Jiayang Zheng, Chenliang Xu | The perceptual-based grouping process produces a hierarchical and compositional image representation that helps both human and machine vision systems recognize heterogeneous visual concepts. Examples can be found in the classical hierarchical superpixel segmentation or image parsing works. However, the grouping process is largely overlooked in modern CNN-based image segmentation networks due to many challenges, including the inherent incompatibility between the grid-shaped CNN feature map and the irregular-shaped perceptual grouping hierarchy. Overcoming these challenges, we propose a deep grouping model (DGM) that tightly marries the two types of representations and defines a bottom-up and a top-down process for feature exchanging. When evaluating the model on the recent Broden+ dataset for the unified perceptual parsing task, it achieves state-of-the-art results while having a small computational overhead compared to other contextual-based segmentation models. Furthermore, the DGM has better interpretability compared with modern CNN methods. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Li_Deep_Grouping_Model_for_Unified_Perceptual_Parsing_CVPR_2020_paper.pdf | http://arxiv.org/abs/2003.11647 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Li_Deep_Grouping_Model_for_Unified_Perceptual_Parsing_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Li_Deep_Grouping_Model_for_Unified_Perceptual_Parsing_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Li_Deep_Grouping_Model_CVPR_2020_supplemental.pdf | null | null |
Inter-Task Association Critic for Cross-Resolution Person Re-Identification | Zhiyi Cheng, Qi Dong, Shaogang Gong, Xiatian Zhu | Person images captured by unconstrained surveillance cameras often have low resolutions (LR). This causes the resolution mismatch problem when matched against the high-resolution (HR) gallery images, negatively affecting the performance of person re-identification (re-id). An effective approach is to leverage image super-resolution (SR) along with person re-id in a joint learning manner. However, this scheme is limited due to dramatically more difficult gradients backpropagation during training. In this paper, we introduce a novel model training regularisation method, called Inter-Task Association Critic (INTACT), to address this fundamental problem. Specifically, INTACT discovers the underlying association knowledge between image SR and person re-id, and leverages it as an extra learning constraint for enhancing the compatibility of SR model with person re-id in HR image space. This is realised by parameterising the association constraint which enables it to be automatically learned from the training data. Extensive experiments validate the superiority of INTACT over the state-of-the-art approaches on the cross-resolution re-id task using five standard person re-id datasets. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Cheng_Inter-Task_Association_Critic_for_Cross-Resolution_Person_Re-Identification_CVPR_2020_paper.pdf | null | https://www.youtube.com/watch?v=wCsuJbwd9zw | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Cheng_Inter-Task_Association_Critic_for_Cross-Resolution_Person_Re-Identification_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Cheng_Inter-Task_Association_Critic_for_Cross-Resolution_Person_Re-Identification_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
SOS: Selective Objective Switch for Rapid Immunofluorescence Whole Slide Image Classification | Sam Maksoud, Kun Zhao, Peter Hobson, Anthony Jennings, Brian C. Lovell | The difficulty of processing gigapixel whole slide images (WSIs) in clinical microscopy has been a long-standing barrier to implementing computer aided diagnostic systems. Since modern computing resources are unable to perform computations at this extremely large scale, current state of the art methods utilize patch-based processing to preserve the resolution of WSIs. However, these methods are often resource intensive and make significant compromises on processing time. In this paper, we demonstrate that conventional patch-based processing is redundant for certain WSI classification tasks where high resolution is only required in a minority of cases. This reflects what is observed in clinical practice; where a pathologist may screen slides using a low power objective and only switch to a high power in cases where they are uncertain about their findings. To eliminate these redundancies, we propose a method for the selective use of high resolution processing based on the confidence of predictions on downscaled WSIs --- we call this the Selective Objective Switch (SOS). Our method is validated on a novel dataset of 684 Liver-Kidney-Stomach immunofluorescence WSIs routinely used in the investigation of autoimmune liver disease. By limiting high resolution processing to cases which cannot be classified confidently at low resolution, we maintain the accuracy of patch-level analysis whilst reducing the inference time by a factor of 7.74. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Maksoud_SOS_Selective_Objective_Switch_for_Rapid_Immunofluorescence_Whole_Slide_Image_CVPR_2020_paper.pdf | http://arxiv.org/abs/2003.05080 | https://www.youtube.com/watch?v=RuoNHzsqOnM | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Maksoud_SOS_Selective_Objective_Switch_for_Rapid_Immunofluorescence_Whole_Slide_Image_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Maksoud_SOS_Selective_Objective_Switch_for_Rapid_Immunofluorescence_Whole_Slide_Image_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
Learning Multi-Granular Hypergraphs for Video-Based Person Re-Identification | Yichao Yan, Jie Qin, Jiaxin Chen, Li Liu, Fan Zhu, Ying Tai, Ling Shao | Video-based person re-identification (re-ID) is an important research topic in computer vision. The key to tackling the challenging task is to exploit both spatial and temporal clues in video sequences. In this work, we propose a novel graph-based framework, namely Multi-Granular Hypergraph (MGH), to pursue better representational capabilities by modeling spatiotemporal dependencies in terms of multiple granularities. Specifically, hypergraphs with different spatial granularities are constructed using various levels of part-based features across the video sequence. In each hypergraph, different temporal granularities are captured by hyperedges that connect a set of graph nodes (i.e., part-based features) across different temporal ranges. Two critical issues (misalignment and occlusion) are explicitly addressed by the proposed hypergraph propagation and feature aggregation schemes. Finally, we further enhance the overall video representation by learning more diversified graph-level representations of multiple granularities based on mutual information minimization. Extensive experiments on three widely-adopted benchmarks clearly demonstrate the effectiveness of the proposed framework. Notably, 90.0% top-1 accuracy on MARS is achieved using MGH, outperforming the state-of-the-arts. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Yan_Learning_Multi-Granular_Hypergraphs_for_Video-Based_Person_Re-Identification_CVPR_2020_paper.pdf | null | https://www.youtube.com/watch?v=StZT8JnmrjY | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Yan_Learning_Multi-Granular_Hypergraphs_for_Video-Based_Person_Re-Identification_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Yan_Learning_Multi-Granular_Hypergraphs_for_Video-Based_Person_Re-Identification_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
Towards Unified INT8 Training for Convolutional Neural Network | Feng Zhu, Ruihao Gong, Fengwei Yu, Xianglong Liu, Yanfei Wang, Zhelong Li, Xiuqi Yang, Junjie Yan | Recently, low-bit (e.g., 8-bit) network quantization has been extensively studied to accelerate inference. Beyond inference, low-bit training with quantized gradients can bring further considerable acceleration, since the backward process is often computation-intensive. Unfortunately, inappropriate quantization of backward propagation usually makes the training unstable and can even cause it to crash. A successful unified low-bit training framework that can support diverse networks on various tasks is still lacking. In this paper, we attempt to build a unified 8-bit (INT8) training framework for common convolutional neural networks from the aspects of both accuracy and speed. First, we empirically find four distinctive characteristics of gradients, which provide us with insightful clues for gradient quantization. Then, we theoretically give an in-depth analysis of the convergence bound and derive two principles for stable INT8 training. Finally, we propose two universal techniques: Direction Sensitive Gradient Clipping, which reduces the direction deviation of gradients, and Deviation Counteractive Learning Rate Scaling, which avoids illegal gradient updates along the wrong direction. The experiments show that our unified solution promises accurate and efficient INT8 training for a variety of networks and tasks, including MobileNetV2, InceptionV3 and object detection, on which prior studies have never succeeded. Moreover, it enjoys strong flexibility to run on off-the-shelf hardware, and reduces the training time by 22% on a Pascal GPU without much optimization effort. We believe that this pioneering study will help lead the community towards fully unified INT8 training for convolutional neural networks. | https://openaccess.thecvf.com/content_CVPR_2020/papers/Zhu_Towards_Unified_INT8_Training_for_Convolutional_Neural_Network_CVPR_2020_paper.pdf | http://arxiv.org/abs/1912.12607 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Zhu_Towards_Unified_INT8_Training_for_Convolutional_Neural_Network_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Zhu_Towards_Unified_INT8_Training_for_Convolutional_Neural_Network_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com/content_CVPR_2020/supplemental/Zhu_Towards_Unified_INT8_CVPR_2020_supplemental.pdf | null | null |
Towards Achieving Adversarial Robustness by Enforcing Feature Consistency Across Bit Planes | Sravanti Addepalli, Vivek B.S., Arya Baburaj, Gaurang Sriramanan, R. Venkatesh Babu | As humans, we inherently perceive images based on their predominant features, and ignore noise embedded within lower bit planes. On the contrary, Deep Neural Networks are known to confidently misclassify images corrupted with meticulously crafted perturbations that are nearly imperceptible to the human eye. In this work, we attempt to address this problem by training networks to form coarse impressions based on the information in higher bit planes, and use the lower bit planes only to refine their prediction. We demonstrate that, by imposing consistency on the representations learned across differently quantized images, the adversarial robustness of networks improves significantly when compared to a normally trained model. Present state-of-the-art defenses against adversarial attacks require the networks to be explicitly trained using adversarial samples that are computationally expensive to generate. While such methods that use adversarial training continue to achieve the best results, this work paves the way towards achieving robustness without having to explicitly train on adversarial samples. The proposed approach is therefore faster, and also closer to the natural learning process in humans. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Addepalli_Towards_Achieving_Adversarial_Robustness_by_Enforcing_Feature_Consistency_Across_Bit_CVPR_2020_paper.pdf | null | https://www.youtube.com/watch?v=JnoPocpLEoo | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Addepalli_Towards_Achieving_Adversarial_Robustness_by_Enforcing_Feature_Consistency_Across_Bit_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Addepalli_Towards_Achieving_Adversarial_Robustness_by_Enforcing_Feature_Consistency_Across_Bit_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Addepalli_Towards_Achieving_Adversarial_CVPR_2020_supplemental.pdf | null | null |
HOnnotate: A Method for 3D Annotation of Hand and Object Poses | Shreyas Hampali, Mahdi Rad, Markus Oberweger, Vincent Lepetit | We propose a method for annotating images of a hand manipulating an object with the 3D poses of both the hand and the object, together with a dataset created using this method. Our motivation is the current lack of annotated real images for this problem, as estimating the 3D poses is challenging, mostly because of the mutual occlusions between the hand and the object. To tackle this challenge, we capture sequences with one or several RGB-D cameras and jointly optimize the 3D hand and object poses over all the frames simultaneously. This method allows us to automatically annotate each frame with accurate estimates of the poses, despite large mutual occlusions. With this method, we created HO-3D, the first markerless dataset of color images with 3D annotations for both the hand and object. This dataset is currently made of 77,558 frames, 68 sequences, 10 persons, and 10 objects. Using our dataset, we develop a single RGB image-based method to predict the hand pose when interacting with objects under severe occlusions and show it generalizes to objects not seen in the dataset. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Hampali_HOnnotate_A_Method_for_3D_Annotation_of_Hand_and_Object_CVPR_2020_paper.pdf | http://arxiv.org/abs/1907.01481 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Hampali_HOnnotate_A_Method_for_3D_Annotation_of_Hand_and_Object_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Hampali_HOnnotate_A_Method_for_3D_Annotation_of_Hand_and_Object_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
From Patches to Pictures (PaQ-2-PiQ): Mapping the Perceptual Space of Picture Quality | Zhenqiang Ying, Haoran Niu, Praful Gupta, Dhruv Mahajan, Deepti Ghadiyaram, Alan Bovik | Blind or no-reference (NR) perceptual picture quality prediction is a difficult, unsolved problem of great consequence to the social and streaming media industries that impacts billions of viewers daily. Unfortunately, popular NR prediction models perform poorly on real-world distorted pictures. To advance progress on this problem, we introduce the largest (by far) subjective picture quality database, containing about 40, 000 real-world distorted pictures and 120, 000 patches, on which we collected about 4M human judgments of picture quality. Using these picture and patch quality labels, we built deep region-based architectures that learn to produce state-of-the-art global picture quality predictions as well as useful local picture quality maps. Our innovations include picture quality prediction architectures that produce global-to-local inferences as well as local-to-global inferences (via feedback). The dataset and source code are available at https: //live.ece.utexas.edu/research.php. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Ying_From_Patches_to_Pictures_PaQ-2-PiQ_Mapping_the_Perceptual_Space_of_CVPR_2020_paper.pdf | null | https://www.youtube.com/watch?v=Hg2ZSVlpQjE | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Ying_From_Patches_to_Pictures_PaQ-2-PiQ_Mapping_the_Perceptual_Space_of_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Ying_From_Patches_to_Pictures_PaQ-2-PiQ_Mapping_the_Perceptual_Space_of_CVPR_2020_paper.html | CVPR 2020 | null | https://cove.thecvf.com/datasets/349 | null |
Fast MSER | Hailiang Xu, Siqi Xie, Fan Chen | Maximally Stable Extremal Regions (MSER) algorithms are based on the component tree and are used to detect invariant regions. OpenCV MSER, the most popular MSER implementation, uses a linked list to associate pixels with ERs. The data structure of an ER contains the attributes of a head and a tail linked node, which makes OpenCV MSER hard to parallelize using existing parallel component tree strategies. In addition, pixel extraction (i.e., extracting the pixels in MSERs) in OpenCV MSER is very slow. In this paper, we propose two novel MSER algorithms, called Fast MSER V1 and V2. They first divide an image into several spatial partitions, then construct sub-trees and doubly linked lists (for V1) or a labelled image (for V2) on the partitions in parallel. A novel sub-tree merging algorithm is used in V1 to merge the sub-trees into the final tree, with the doubly linked lists also merged in the process, while V2 merges the sub-trees using an existing merging algorithm. Finally, MSERs are recognized, and the pixels in them are extracted through two novel pixel extraction methods that take advantage of the fact that many pixels in parent and child MSERs are duplicated. Both V1 and V2 outperform three open-source MSER algorithms (running 28 and 26 times faster than OpenCV MSER, respectively), and reduce the memory used for the pixels in MSERs by 78%. | https://openaccess.thecvf.com/content_CVPR_2020/papers/Xu_Fast_MSER_CVPR_2020_paper.pdf | null | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Xu_Fast_MSER_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Xu_Fast_MSER_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
Seeing Around Street Corners: Non-Line-of-Sight Detection and Tracking In-the-Wild Using Doppler Radar | Nicolas Scheiner, Florian Kraus, Fangyin Wei, Buu Phan, Fahim Mannan, Nils Appenrodt, Werner Ritter, Jurgen Dickmann, Klaus Dietmayer, Bernhard Sick, Felix Heide | Conventional sensor systems record information about directly visible objects, whereas occluded scene components are considered lost in the measurement process. Non-line-of-sight (NLOS) methods try to recover such hidden objects from their indirect reflections - faint signal components, traditionally treated as measurement noise. Existing NLOS approaches struggle to record these low-signal components outside the lab, and do not scale to large-scale outdoor scenes and high-speed motion, typical in automotive scenarios. In particular, optical NLOS capture is fundamentally limited by the quartic intensity falloff of diffuse indirect reflections. In this work, we depart from visible-wavelength approaches and demonstrate detection, classification, and tracking of hidden objects in large-scale dynamic environments using Doppler radars that can be manufactured at low-cost in series production. To untangle noisy indirect and direct reflections, we learn from temporal sequences of Doppler velocity and position measurements, which we fuse in a joint NLOS detection and tracking network over time. We validate the approach on in-the-wild automotive scenes, including sequences of parked cars or house facades as relay surfaces, and demonstrate low-cost, real-time NLOS in dynamic automotive environments. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Scheiner_Seeing_Around_Street_Corners_Non-Line-of-Sight_Detection_and_Tracking_In-the-Wild_Using_CVPR_2020_paper.pdf | http://arxiv.org/abs/1912.06613 | https://www.youtube.com/watch?v=y1WUHuZd8Mg | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Scheiner_Seeing_Around_Street_Corners_Non-Line-of-Sight_Detection_and_Tracking_In-the-Wild_Using_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Scheiner_Seeing_Around_Street_Corners_Non-Line-of-Sight_Detection_and_Tracking_In-the-Wild_Using_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Scheiner_Seeing_Around_Street_CVPR_2020_supplemental.pdf | null | null |
Weakly Supervised Visual Semantic Parsing | Alireza Zareian, Svebor Karaman, Shih-Fu Chang | Scene Graph Generation (SGG) aims to extract entities, predicates and their semantic structure from images, enabling deep understanding of visual content, with many applications such as visual reasoning and image retrieval. Nevertheless, existing SGG methods require millions of manually annotated bounding boxes for training, and are computationally inefficient, as they exhaustively process all pairs of object proposals to detect predicates. In this paper, we address those two limitations by first proposing a generalized formulation of SGG, namely Visual Semantic Parsing, which disentangles entity and predicate recognition, and enables sub-quadratic performance. Then we propose the Visual Semantic Parsing Network, VSPNet, based on a dynamic, attention-based, bipartite message passing framework that jointly infers graph nodes and edges through an iterative process. Additionally, we propose the first graph-based weakly supervised learning framework, based on a novel graph alignment algorithm, which enables training without bounding box annotations. Through extensive experiments, we show that VSPNet outperforms weakly supervised baselines significantly and approaches fully supervised performance, while being several times faster. We publicly release the source code of our method. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Zareian_Weakly_Supervised_Visual_Semantic_Parsing_CVPR_2020_paper.pdf | http://arxiv.org/abs/2001.02359 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Zareian_Weakly_Supervised_Visual_Semantic_Parsing_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Zareian_Weakly_Supervised_Visual_Semantic_Parsing_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
Bringing Old Photos Back to Life | Ziyu Wan, Bo Zhang, Dongdong Chen, Pan Zhang, Dong Chen, Jing Liao, Fang Wen | We propose to restore old photos that suffer from severe degradation through a deep learning approach. Unlike conventional restoration tasks that can be solved through supervised learning, the degradation in real photos is complex and the domain gap between synthetic images and real old photos makes the network fail to generalize. Therefore, we propose a novel triplet domain translation network by leveraging real photos along with massive synthetic image pairs. Specifically, we train two variational autoencoders (VAEs) to respectively transform old photos and clean photos into two latent spaces. And the translation between these two latent spaces is learned with synthetic paired data. This translation generalizes well to real photos because the domain gap is closed in the compact latent space. Besides, to address multiple degradations mixed in one old photo, we design a global branch with a partial nonlocal block targeting to the structured defects, such as scratches and dust spots, and a local branch targeting to the unstructured defects, such as noises and blurriness. Two branches are fused in the latent space, leading to improved capability to restore old photos from multiple defects. The proposed method outperforms state-of-the-art methods in terms of visual quality for old photos restoration. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Wan_Bringing_Old_Photos_Back_to_Life_CVPR_2020_paper.pdf | http://arxiv.org/abs/2004.09484 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Wan_Bringing_Old_Photos_Back_to_Life_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Wan_Bringing_Old_Photos_Back_to_Life_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Wan_Bringing_Old_Photos_CVPR_2020_supplemental.pdf | null | null |
Enhanced Blind Face Restoration With Multi-Exemplar Images and Adaptive Spatial Feature Fusion | Xiaoming Li, Wenyu Li, Dongwei Ren, Hongzhi Zhang, Meng Wang, Wangmeng Zuo | In many real-world face restoration applications, e.g., smartphone photo albums and old films, multiple high-quality (HQ) images of the same person usually are available for a given degraded low-quality (LQ) observation. However, most existing guided face restoration methods are based on single HQ exemplar image, and are limited in properly exploiting guidance for improving the generalization ability to unknown degradation process. To address these issues, this paper suggests to enhance blind face restoration performance by utilizing multi-exemplar images and adaptive fusion of features from guidance and degraded images. First, given a degraded observation, we select the optimal guidance based on the weighted affine distance on landmark sets, where the landmark weights are learned to make the guidance image optimized to HQ image reconstruction. Second, moving least-square and adaptive instance normalization are leveraged for spatial alignment and illumination translation of guidance image in the feature space. Finally, for better feature fusion, multiple adaptive spatial feature fusion (ASFF) layers are introduced to incorporate guidance features in an adaptive and progressive manner, resulting in our ASFFNet. Experiments show that our ASFFNet performs favorably in terms of quantitative and qualitative evaluation, and is effective in generating photo-realistic results on real-world LQ images. The source code and models are available at https://github.com/csxmli2016/ASFFNet. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Li_Enhanced_Blind_Face_Restoration_With_Multi-Exemplar_Images_and_Adaptive_Spatial_CVPR_2020_paper.pdf | null | https://www.youtube.com/watch?v=5GR_gMKqfuI | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Li_Enhanced_Blind_Face_Restoration_With_Multi-Exemplar_Images_and_Adaptive_Spatial_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Li_Enhanced_Blind_Face_Restoration_With_Multi-Exemplar_Images_and_Adaptive_Spatial_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Li_Enhanced_Blind_Face_CVPR_2020_supplemental.pdf | null | null |
Geometry-Aware Satellite-to-Ground Image Synthesis for Urban Areas | Xiaohu Lu, Zuoyue Li, Zhaopeng Cui, Martin R. Oswald, Marc Pollefeys, Rongjun Qin | We present a novel method for generating panoramic street-view images which are geometrically consistent with a given satellite image. Different from existing approaches that completely rely on a deep learning architecture to generalize cross-view image distributions, our approach explicitly loops in the geometric configuration of the ground objects based on the satellite views, such that the produced ground view synthesis preserves the geometric shape and the semantics of the scene. In particular, we propose a neural network with a geo-transformation layer that turns predicted ground-height values from the satellite view to a ground view while retaining the physical satellite-to-ground relation. Our results show that the synthesized image retains well-articulated and authentic geometric shapes, as well as texture richness of the street-view in various scenarios. Both qualitative and quantitative results demonstrate that our method compares favorably to other state-of-the-art approaches that lack geometric consistency. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Lu_Geometry-Aware_Satellite-to-Ground_Image_Synthesis_for_Urban_Areas_CVPR_2020_paper.pdf | null | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Lu_Geometry-Aware_Satellite-to-Ground_Image_Synthesis_for_Urban_Areas_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Lu_Geometry-Aware_Satellite-to-Ground_Image_Synthesis_for_Urban_Areas_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Lu_Geometry-Aware_Satellite-to-Ground_Image_CVPR_2020_supplemental.pdf | null | null |
End-to-End Learning Local Multi-View Descriptors for 3D Point Clouds | Lei Li, Siyu Zhu, Hongbo Fu, Ping Tan, Chiew-Lan Tai | In this work, we propose an end-to-end framework to learn local multi-view descriptors for 3D point clouds. To adopt a similar multi-view representation, existing studies use hand-crafted viewpoints for rendering in a preprocessing stage, which is detached from the subsequent descriptor learning stage. In our framework, we integrate the multi-view rendering into neural networks by using a differentiable renderer, which allows the viewpoints to be optimizable parameters for capturing more informative local context of interest points. To obtain discriminative descriptors, we also design a soft-view pooling module to attentively fuse convolutional features across views. Extensive experiments on existing 3D registration benchmarks show that our method outperforms existing local descriptors both quantitatively and qualitatively. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Li_End-to-End_Learning_Local_Multi-View_Descriptors_for_3D_Point_Clouds_CVPR_2020_paper.pdf | http://arxiv.org/abs/2003.05855 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Li_End-to-End_Learning_Local_Multi-View_Descriptors_for_3D_Point_Clouds_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Li_End-to-End_Learning_Local_Multi-View_Descriptors_for_3D_Point_Clouds_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Li_End-to-End_Learning_Local_CVPR_2020_supplemental.pdf | null | null |
Multi-Scale Boosted Dehazing Network With Dense Feature Fusion | Hang Dong, Jinshan Pan, Lei Xiang, Zhe Hu, Xinyi Zhang, Fei Wang, Ming-Hsuan Yang | In this paper, we propose a Multi-Scale Boosted Dehazing Network with Dense Feature Fusion based on the U-Net architecture. The proposed method is designed based on two principles, boosting and error feedback, and we show that they are suitable for the dehazing problem. By incorporating the Strengthen-Operate-Subtract boosting strategy in the decoder of the proposed model, we develop a simple yet effective boosted decoder to progressively restore the haze-free image. To address the issue of preserving spatial information in the U-Net architecture, we design a dense feature fusion module using the back-projection feedback scheme. We show that the dense feature fusion module can simultaneously remedy the missing spatial information from high-resolution features and exploit the non-adjacent features. Extensive evaluations demonstrate that the proposed model performs favorably against the state-of-the-art approaches on the benchmark datasets as well as real-world hazy images. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Dong_Multi-Scale_Boosted_Dehazing_Network_With_Dense_Feature_Fusion_CVPR_2020_paper.pdf | http://arxiv.org/abs/2004.13388 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Dong_Multi-Scale_Boosted_Dehazing_Network_With_Dense_Feature_Fusion_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Dong_Multi-Scale_Boosted_Dehazing_Network_With_Dense_Feature_Fusion_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Dong_Multi-Scale_Boosted_Dehazing_CVPR_2020_supplemental.pdf | null | null |
A Multi-Hypothesis Approach to Color Constancy | Daniel Hernandez-Juarez, Sarah Parisot, Benjamin Busam, Ales Leonardis, Gregory Slabaugh, Steven McDonagh | Contemporary approaches frame the color constancy problem as learning camera specific illuminant mappings. While high accuracy can be achieved on camera specific data, these models depend on camera spectral sensitivity and typically exhibit poor generalisation to new devices. Additionally, regression methods produce point estimates that do not explicitly account for potential ambiguities among plausible illuminant solutions, due to the ill-posed nature of the problem. We propose a Bayesian framework that naturally handles color constancy ambiguity via a multi-hypothesis strategy. Firstly, we select a set of candidate scene illuminants in a data-driven fashion and apply them to a target image to generate a set of corrected images. Secondly, we estimate, for each corrected image, the likelihood of the light source being achromatic using a camera-agnostic CNN. Finally, our method explicitly learns a final illumination estimate from the generated posterior probability distribution. Our likelihood estimator learns to answer a camera-agnostic question and thus enables effective multi-camera training by disentangling illuminant estimation from the supervised learning task. We extensively evaluate our proposed approach and additionally set a benchmark for novel sensor generalisation without re-training. Our method provides state-of-the-art accuracy on multiple public datasets (up to 11% median angular error improvement) while maintaining real-time execution. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Hernandez-Juarez_A_Multi-Hypothesis_Approach_to_Color_Constancy_CVPR_2020_paper.pdf | null | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Hernandez-Juarez_A_Multi-Hypothesis_Approach_to_Color_Constancy_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Hernandez-Juarez_A_Multi-Hypothesis_Approach_to_Color_Constancy_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Hernandez-Juarez_A_Multi-Hypothesis_Approach_CVPR_2020_supplemental.zip | null | null |
MSeg: A Composite Dataset for Multi-Domain Semantic Segmentation | John Lambert, Zhuang Liu, Ozan Sener, James Hays, Vladlen Koltun | We present MSeg, a composite dataset that unifies semantic segmentation datasets from different domains. A naive merge of the constituent datasets yields poor performance due to inconsistent taxonomies and annotation practices. We reconcile the taxonomies and bring the pixel-level annotations into alignment by relabeling more than 220,000 object masks in more than 80,000 images. The resulting composite dataset enables training a single semantic segmentation model that functions effectively across domains and generalizes to datasets that were not seen during training. We adopt zero-shot cross-dataset transfer as a benchmark to systematically evaluate a model's robustness and show that MSeg training yields substantially more robust models in comparison to training on individual datasets or naive mixing of datasets without the presented contributions. A model trained on MSeg ranks first on the WildDash leaderboard for robust semantic segmentation, with no exposure to WildDash data during training. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Lambert_MSeg_A_Composite_Dataset_for_Multi-Domain_Semantic_Segmentation_CVPR_2020_paper.pdf | null | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Lambert_MSeg_A_Composite_Dataset_for_Multi-Domain_Semantic_Segmentation_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Lambert_MSeg_A_Composite_Dataset_for_Multi-Domain_Semantic_Segmentation_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
Learning Event-Based Motion Deblurring | Zhe Jiang, Yu Zhang, Dongqing Zou, Jimmy Ren, Jiancheng Lv, Yebin Liu | Recovering a sharp video sequence from a motion-blurred image is highly ill-posed due to the significant loss of motion information in the blurring process. For event-based cameras, however, fast motion can be captured as events at a high frame rate, raising new opportunities for exploring effective solutions. In this paper, we start from a sequential formulation of event-based motion deblurring, then show how its optimization can be unfolded with a novel end-to-end deep architecture. The proposed architecture is a convolutional recurrent neural network that integrates visual and temporal knowledge at both global and local scales in a principled manner. To further improve the reconstruction, we propose a differentiable directional event filtering module to effectively extract rich boundary priors from the evolution of events. We conduct extensive experiments on the synthetic GoPro dataset and a large newly introduced dataset captured by a DAVIS240C camera. The proposed approach achieves state-of-the-art reconstruction quality, and generalizes better to real-world motion blur. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Jiang_Learning_Event-Based_Motion_Deblurring_CVPR_2020_paper.pdf | http://arxiv.org/abs/2004.05794 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Jiang_Learning_Event-Based_Motion_Deblurring_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Jiang_Learning_Event-Based_Motion_Deblurring_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
DMCP: Differentiable Markov Channel Pruning for Neural Networks | Shaopeng Guo, Yujie Wang, Quanquan Li, Junjie Yan | Recent works imply that channel pruning can be regarded as searching for an optimal sub-structure within an unpruned network. However, existing works based on this observation require training and evaluating a large number of structures, which limits their application. In this paper, we propose a novel differentiable method for channel pruning, named Differentiable Markov Channel Pruning (DMCP), to efficiently search for the optimal sub-structure. Our method is differentiable and can be directly optimized by gradient descent with respect to the standard task loss and a budget regularization (e.g., a FLOPs constraint). In DMCP, we model channel pruning as a Markov process, in which each state represents retaining the corresponding channel during pruning, and transitions between states denote the pruning process. In the end, our method is able to implicitly select the proper number of channels in each layer through the Markov process with optimized transitions. To validate the effectiveness of our method, we perform extensive experiments on ImageNet with ResNet and MobileNetV2. Results show our method achieves consistent improvements over state-of-the-art pruning methods under various FLOPs settings. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Guo_DMCP_Differentiable_Markov_Channel_Pruning_for_Neural_Networks_CVPR_2020_paper.pdf | http://arxiv.org/abs/2005.03354 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Guo_DMCP_Differentiable_Markov_Channel_Pruning_for_Neural_Networks_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Guo_DMCP_Differentiable_Markov_Channel_Pruning_for_Neural_Networks_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Guo_DMCP_Differentiable_Markov_CVPR_2020_supplemental.pdf | null | null |
From Fidelity to Perceptual Quality: A Semi-Supervised Approach for Low-Light Image Enhancement | Wenhan Yang, Shiqi Wang, Yuming Fang, Yue Wang, Jiaying Liu | Under-exposure introduces a series of visual degradations, e.g., decreased visibility, intense noise, and biased color. To address these problems, we propose a novel semi-supervised learning approach for low-light image enhancement. A deep recursive band network (DRBN) is proposed to recover a linear band representation of an enhanced normal-light image using paired low/normal-light images, and then obtain an improved result by recomposing the given bands via another learnable linear transformation based on perceptual quality-driven adversarial learning with unpaired data. The architecture is powerful and flexible, with the merit of being trainable on both paired and unpaired data. On the one hand, the proposed network is well designed to extract a series of coarse-to-fine band representations, whose estimations are mutually beneficial in a recursive process. On the other hand, the extracted band representation of the enhanced image in the first stage of DRBN (recursive band learning) bridges the gap between the restoration knowledge of paired data and the perceptual quality preference for real high-quality images. Its second stage (band recomposition) learns to recompose the band representation towards fitting perceptual properties of high-quality images via adversarial learning. With the help of this two-stage design, our approach generates enhanced results with well-reconstructed details and visually promising contrast and color distributions. Qualitative and quantitative evaluations demonstrate the superiority of our DRBN. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Yang_From_Fidelity_to_Perceptual_Quality_A_Semi-Supervised_Approach_for_Low-Light_CVPR_2020_paper.pdf | null | https://www.youtube.com/watch?v=J5ogMvSDdF4 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Yang_From_Fidelity_to_Perceptual_Quality_A_Semi-Supervised_Approach_for_Low-Light_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Yang_From_Fidelity_to_Perceptual_Quality_A_Semi-Supervised_Approach_for_Low-Light_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Yang_From_Fidelity_to_CVPR_2020_supplemental.pdf | null | null |
Learning Integral Objects With Intra-Class Discriminator for Weakly-Supervised Semantic Segmentation | Junsong Fan, Zhaoxiang Zhang, Chunfeng Song, Tieniu Tan | Image-level weakly-supervised semantic segmentation (WSSS) aims at learning semantic segmentation by adopting only image class labels. Existing approaches generally rely on class activation maps (CAM) to generate pseudo-masks and then train segmentation models. The main difficulty is that the CAM estimate only covers part of the foreground objects. In this paper, we argue that the critical factor preventing the full object mask from being obtained is the classification boundary mismatch problem in applying the CAM to WSSS. Because the CAM is optimized by the classification task, it focuses on the discrimination across different image-level classes. However, WSSS requires distinguishing pixels that share the same image-level class in order to separate them into the foreground and the background. To alleviate this contradiction, we propose an efficient end-to-end Intra-Class Discriminator (ICD) framework, which learns intra-class boundaries to help separate the foreground and the background within each image-level class. Without bells and whistles, our approach achieves the state-of-the-art performance of image label based WSSS, with mIoU 68.0% on the VOC 2012 semantic segmentation benchmark, demonstrating the effectiveness of the proposed approach. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Fan_Learning_Integral_Objects_With_Intra-Class_Discriminator_for_Weakly-Supervised_Semantic_Segmentation_CVPR_2020_paper.pdf | null | https://www.youtube.com/watch?v=3ulix8rfBhY | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Fan_Learning_Integral_Objects_With_Intra-Class_Discriminator_for_Weakly-Supervised_Semantic_Segmentation_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Fan_Learning_Integral_Objects_With_Intra-Class_Discriminator_for_Weakly-Supervised_Semantic_Segmentation_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
Enhancing Cross-Task Black-Box Transferability of Adversarial Examples With Dispersion Reduction | Yantao Lu, Yunhan Jia, Jianyu Wang, Bai Li, Weiheng Chai, Lawrence Carin, Senem Velipasalar | Neural networks are known to be vulnerable to carefully crafted adversarial examples, and these malicious samples often transfer, i.e., they remain adversarial even against other models. Although significant effort has been devoted to transferability across models, surprisingly little attention has been paid to cross-task transferability, which represents the real-world cybercriminal's situation, where an ensemble of different defense/detection mechanisms needs to be evaded all at once. We investigate the transferability of adversarial examples across a wide range of real-world computer vision tasks, including image classification, object detection, semantic segmentation, explicit content detection, and text detection. Our proposed attack minimizes the "dispersion" of the internal feature map, overcoming the limitations of existing attacks that require task-specific loss functions and/or probing a target model. We conduct evaluation on open-source detection and segmentation models, as well as four different computer vision tasks provided by Google Cloud Vision (GCV) APIs. We demonstrate that our approach outperforms existing attacks by degrading the performance of multiple CV tasks by a large margin with only modest perturbations. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Lu_Enhancing_Cross-Task_Black-Box_Transferability_of_Adversarial_Examples_With_Dispersion_Reduction_CVPR_2020_paper.pdf | http://arxiv.org/abs/1911.11616 | https://www.youtube.com/watch?v=hMEyXBqWa08 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Lu_Enhancing_Cross-Task_Black-Box_Transferability_of_Adversarial_Examples_With_Dispersion_Reduction_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Lu_Enhancing_Cross-Task_Black-Box_Transferability_of_Adversarial_Examples_With_Dispersion_Reduction_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Lu_Enhancing_Cross-Task_Black-Box_CVPR_2020_supplemental.pdf | null | null |
Mesh-Guided Multi-View Stereo With Pyramid Architecture | Yuesong Wang, Tao Guan, Zhuo Chen, Yawei Luo, Keyang Luo, Lili Ju | Multi-view stereo (MVS) aims to reconstruct the 3D geometry of the target scene by using only information from 2D images. Although much progress has been made, it still suffers from textureless regions. To overcome this difficulty, we propose a mesh-guided MVS method with a pyramid architecture, which makes use of the surface mesh obtained from coarse-scale images to guide the reconstruction process. Specifically, a PatchMatch-based MVS algorithm is first used to generate depth maps for coarse-scale images, and the corresponding surface mesh is obtained by a surface reconstruction algorithm. Next, we project the mesh onto each of the depth maps to replace unreliable depth values, and the corrected depth maps are fed to fine-scale reconstruction for initialization. To alleviate the influence of possible erroneous faces on the mesh, we further design and train a convolutional neural network to remove incorrect depths. In addition, it is often hard for the correct depth values of low-textured regions to survive at the fine scale, so we also develop an efficient method to seek out these regions and further enforce geometric consistency in these regions. Experimental results on the ETH3D high-resolution dataset demonstrate that our method achieves state-of-the-art performance, especially in completeness. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Wang_Mesh-Guided_Multi-View_Stereo_With_Pyramid_Architecture_CVPR_2020_paper.pdf | null | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Wang_Mesh-Guided_Multi-View_Stereo_With_Pyramid_Architecture_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Wang_Mesh-Guided_Multi-View_Stereo_With_Pyramid_Architecture_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
BSP-Net: Generating Compact Meshes via Binary Space Partitioning | Zhiqin Chen, Andrea Tagliasacchi, Hao Zhang | Polygonal meshes are ubiquitous in the digital 3D domain, yet they have only played a minor role in the deep learning revolution. Leading methods for learning generative models of shapes rely on implicit functions, and generate meshes only after expensive iso-surfacing routines. To overcome these challenges, we are inspired by a classical spatial data structure from computer graphics, Binary Space Partitioning (BSP), to facilitate 3D learning. The core ingredient of BSP is an operation for recursive subdivision of space to obtain convex sets. By exploiting this property, we devise BSP-Net, a network that learns to represent a 3D shape via convex decomposition. Importantly, BSP-Net is unsupervised since no convex shape decompositions are needed for training. The network is trained to reconstruct a shape using a set of convexes obtained from a BSP-tree built on a set of planes. The convexes inferred by BSP-Net can be easily extracted to form a polygon mesh, without any need for iso-surfacing. The generated meshes are compact (i.e., low-poly) and well suited to represent sharp geometry; they are guaranteed to be watertight and can be easily parameterized. We also show that the reconstruction quality by BSP-Net is competitive with state-of-the-art methods while using much fewer primitives. Code is available at https://github.com/czq142857/BSP-NET-original. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Chen_BSP-Net_Generating_Compact_Meshes_via_Binary_Space_Partitioning_CVPR_2020_paper.pdf | null | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Chen_BSP-Net_Generating_Compact_Meshes_via_Binary_Space_Partitioning_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Chen_BSP-Net_Generating_Compact_Meshes_via_Binary_Space_Partitioning_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
Which Is Plagiarism: Fashion Image Retrieval Based on Regional Representation for Design Protection | Yining Lang, Yuan He, Fan Yang, Jianfeng Dong, Hui Xue | With the rapid growth of e-commerce and the popularity of online shopping, fashion retrieval has received considerable attention in the computer vision community. Different from the existing works that mainly focus on identical or similar fashion item retrieval, in this paper, we aim to study plagiarized clothes retrieval, which has been somewhat ignored in the academic community despite its great application value. One of the key challenges is that plagiarized clothes are usually modified in a certain region of the original design to escape detection by traditional retrieval methods. To address this, we propose a novel network named Plagiarized-Search-Net (PS-Net) based on regional representation, where we utilize landmarks to guide the learning of regional representations and compare fashion items region by region. Besides, we propose a new dataset named Plagiarized Fashion for plagiarized clothes retrieval, which provides a meaningful complement to the existing fashion retrieval field. Experiments on the Plagiarized Fashion dataset verify that our approach is superior to other instance-level counterparts for plagiarized clothes retrieval, showing a promising result for original design protection. Moreover, our PS-Net can also be adapted to traditional fashion retrieval and landmark estimation tasks and achieves state-of-the-art performance on the DeepFashion and DeepFashion2 datasets. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Lang_Which_Is_Plagiarism_Fashion_Image_Retrieval_Based_on_Regional_Representation_CVPR_2020_paper.pdf | null | https://www.youtube.com/watch?v=JRTIv-8EUbU | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Lang_Which_Is_Plagiarism_Fashion_Image_Retrieval_Based_on_Regional_Representation_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Lang_Which_Is_Plagiarism_Fashion_Image_Retrieval_Based_on_Regional_Representation_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
Relation-Aware Global Attention for Person Re-Identification | Zhizheng Zhang, Cuiling Lan, Wenjun Zeng, Xin Jin, Zhibo Chen | For person re-identification (re-id), attention mechanisms have become attractive as they aim at strengthening discriminative features and suppressing irrelevant ones, which matches well the key of re-id, i.e., discriminative feature learning. Previous approaches typically learn attention using local convolutions, ignoring the mining of knowledge from global structure patterns. Intuitively, the affinities among spatial positions/nodes in the feature map provide clustering-like information and are helpful for inferring semantics and thus attention, especially for person images where the feasible human poses are constrained. In this work, we propose an effective Relation-Aware Global Attention (RGA) module which captures the global structural information for better attention learning. Specifically, for each feature position, in order to compactly grasp the structural information of global scope and local appearance information, we propose to stack the relations, i.e., its pairwise correlations/affinities with all the feature positions (e.g., in raster scan order), and the feature itself together to learn the attention with a shallow convolutional model. Extensive ablation studies demonstrate that our RGA can significantly enhance the feature representation power and help achieve the state-of-the-art performance on several popular benchmarks. The source code is available at https://github.com/microsoft/Relation-Aware-Global-Attention-Networks. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Zhang_Relation-Aware_Global_Attention_for_Person_Re-Identification_CVPR_2020_paper.pdf | http://arxiv.org/abs/1904.02998 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Zhang_Relation-Aware_Global_Attention_for_Person_Re-Identification_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Zhang_Relation-Aware_Global_Attention_for_Person_Re-Identification_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Zhang_Relation-Aware_Global_Attention_CVPR_2020_supplemental.pdf | null | null |
Adversarial Camouflage: Hiding Physical-World Attacks With Natural Styles | Ranjie Duan, Xingjun Ma, Yisen Wang, James Bailey, A. K. Qin, Yun Yang | Deep neural networks (DNNs) are known to be vulnerable to adversarial examples. Existing works have mostly focused on either digital adversarial examples created via small and imperceptible perturbations, or physical-world adversarial examples created with large and less realistic distortions that are easily identified by human observers. In this paper, we propose a novel approach, called Adversarial Camouflage (AdvCam), to craft and camouflage physical-world adversarial examples into natural styles that appear legitimate to human observers. Specifically, AdvCam transfers large adversarial perturbations into customized styles, which are then "hidden" on the target object or in the off-target background. Experimental evaluation shows that, in both digital and physical-world scenarios, adversarial examples crafted by AdvCam are well camouflaged and highly stealthy, while remaining effective in fooling state-of-the-art DNN image classifiers. Hence, AdvCam is a flexible approach that can help craft stealthy attacks to evaluate the robustness of DNNs. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Duan_Adversarial_Camouflage_Hiding_Physical-World_Attacks_With_Natural_Styles_CVPR_2020_paper.pdf | http://arxiv.org/abs/2003.08757 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Duan_Adversarial_Camouflage_Hiding_Physical-World_Attacks_With_Natural_Styles_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Duan_Adversarial_Camouflage_Hiding_Physical-World_Attacks_With_Natural_Styles_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
Evolving Losses for Unsupervised Video Representation Learning | AJ Piergiovanni, Anelia Angelova, Michael S. Ryoo | We present a new method to learn video representations from large-scale unlabeled video data. Ideally, this representation will be generic and transferable, directly usable for new tasks such as action recognition and zero or few-shot learning. We formulate unsupervised representation learning as a multi-modal, multi-task learning problem, where the representations are shared across different modalities via distillation. Further, we introduce the concept of loss function evolution by using an evolutionary search algorithm to automatically find optimal combination of loss functions capturing many (self-supervised) tasks and modalities. Thirdly, we propose an unsupervised representation evaluation metric using distribution matching to a large unlabeled dataset as a prior constraint, based on Zipf's law. This unsupervised constraint, which is not guided by any labeling, produces similar results to weakly-supervised, task-specific ones. The proposed unsupervised representation learning results in a single RGB network and outperforms previous methods. Notably, it is also more effective than several label-based methods (e.g., ImageNet), with the exception of large, fully labeled video datasets. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Piergiovanni_Evolving_Losses_for_Unsupervised_Video_Representation_Learning_CVPR_2020_paper.pdf | http://arxiv.org/abs/2002.12177 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Piergiovanni_Evolving_Losses_for_Unsupervised_Video_Representation_Learning_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Piergiovanni_Evolving_Losses_for_Unsupervised_Video_Representation_Learning_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
3DV: 3D Dynamic Voxel for Action Recognition in Depth Video | Yancheng Wang, Yang Xiao, Fu Xiong, Wenxiang Jiang, Zhiguo Cao, Joey Tianyi Zhou, Junsong Yuan | For depth-based 3D action recognition, one essential issue is to represent 3D motion patterns effectively and efficiently. To this end, 3D dynamic voxel (3DV) is proposed as a novel 3D motion representation. With 3D space voxelization, the key idea of 3DV is to compactly encode the 3D motion information within a depth video into a regular voxel set (i.e., 3DV) via temporal rank pooling. Each available 3DV voxel intrinsically involves 3D spatial and motion features for 3D action description. 3DV is then abstracted as a point set and input into PointNet++ for 3D action recognition in an end-to-end learning manner. The intuition for transferring 3DV into the point set form is that PointNet++ is lightweight and effective for deep feature learning on point sets. Since 3DV may lose appearance clues, a multi-stream 3D action recognition approach is also proposed to learn motion and appearance features jointly. To extract richer temporal order information of actions, we also split the depth video into temporal segments and encode this procedure in 3DV integrally. The extensive experiments on the well-established benchmark datasets (e.g., NTU RGB+D 120 and NTU RGB+D 60) demonstrate the superiority of our approach. Impressively, we achieve accuracies of 82.4% and 93.5% on NTU RGB+D 120 under the cross-subject and cross-setup test settings, respectively. 3DV's code is available at https://github.com/3huo/3DV-Action. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Wang_3DV_3D_Dynamic_Voxel_for_Action_Recognition_in_Depth_Video_CVPR_2020_paper.pdf | http://arxiv.org/abs/2005.05501 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Wang_3DV_3D_Dynamic_Voxel_for_Action_Recognition_in_Depth_Video_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Wang_3DV_3D_Dynamic_Voxel_for_Action_Recognition_in_Depth_Video_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
Benchmarking Adversarial Robustness on Image Classification | Yinpeng Dong, Qi-An Fu, Xiao Yang, Tianyu Pang, Hang Su, Zihao Xiao, Jun Zhu | Deep neural networks are vulnerable to adversarial examples, which has become one of the most important research problems in the development of deep learning. While a lot of effort has been made in recent years, it is of great significance to perform correct and complete evaluations of adversarial attack and defense algorithms. In this paper, we establish a comprehensive, rigorous, and coherent benchmark to evaluate adversarial robustness on image classification tasks. After briefly reviewing a wide range of representative attack and defense methods, we perform large-scale experiments with two robustness curves as fair evaluation criteria to fully understand the performance of these methods. Based on the evaluation results, we draw several important findings that can provide insights for future research, including: 1) The relative robustness between models can change across different attack configurations, thus we encourage adopting robustness curves to evaluate adversarial robustness; 2) As one of the most effective defense techniques, adversarial training can generalize across different threat models; 3) Randomization-based defenses are more robust to query-based black-box attacks. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Dong_Benchmarking_Adversarial_Robustness_on_Image_Classification_CVPR_2020_paper.pdf | null | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Dong_Benchmarking_Adversarial_Robustness_on_Image_Classification_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Dong_Benchmarking_Adversarial_Robustness_on_Image_Classification_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Dong_Benchmarking_Adversarial_Robustness_CVPR_2020_supplemental.pdf | null | null |
What It Thinks Is Important Is Important: Robustness Transfers Through Input Gradients | Alvin Chan, Yi Tay, Yew-Soon Ong | Adversarial perturbations are imperceptible changes to input pixels that can change the prediction of deep learning models. Learned weights of models robust to such perturbations have previously been found to be transferable across different tasks, but this applies only if the model architecture for the source and target tasks is the same. Input gradients characterize how small changes at each input pixel affect the model output. Using only natural images, we show here that training a student model's input gradients to match those of a robust teacher model can gain robustness close to a strong baseline that is robustly trained from scratch. Through experiments on MNIST, CIFAR-10, CIFAR-100 and Tiny-ImageNet, we show that our proposed method, input gradient adversarial matching, can transfer robustness across different tasks and even across different model architectures. This demonstrates that directly targeting the semantics of input gradients is a feasible way towards adversarial robustness. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Chan_What_It_Thinks_Is_Important_Is_Important_Robustness_Transfers_Through_CVPR_2020_paper.pdf | http://arxiv.org/abs/1912.05699 | https://www.youtube.com/watch?v=4xSnh5cadyU | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Chan_What_It_Thinks_Is_Important_Is_Important_Robustness_Transfers_Through_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Chan_What_It_Thinks_Is_Important_Is_Important_Robustness_Transfers_Through_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Chan_What_It_Thinks_CVPR_2020_supplemental.pdf | null | null |
Gum-Net: Unsupervised Geometric Matching for Fast and Accurate 3D Subtomogram Image Alignment and Averaging | Xiangrui Zeng, Min Xu | We propose a Geometric unsupervised matching Network (Gum-Net) for finding the geometric correspondence between two images with application to 3D subtomogram alignment and averaging. Subtomogram alignment is the most important task in cryo-electron tomography (cryo-ET), a revolutionary 3D imaging technique for visualizing the molecular organization of unperturbed cellular landscapes in single cells. However, subtomogram alignment and averaging are very challenging due to severe imaging limits such as noise and missing wedge effects. We introduce an end-to-end trainable architecture with three novel modules specifically designed for preserving feature spatial information and propagating feature matching information. The training is performed in a fully unsupervised fashion to optimize a matching metric. No ground truth transformation information nor category-level or instance-level matching supervision information is needed. After systematic assessments on six real and nine simulated datasets, we demonstrate that Gum-Net reduced the alignment error by 40 to 50% and improved the averaging resolution by 10%. Gum-Net also achieved 70 to 110 times speedup in practice with GPU acceleration compared to state-of-the-art subtomogram alignment methods. Our work is the first 3D unsupervised geometric matching method for images of strong transformation variation and high noise level. The training code, trained model, and datasets are available in our open-source software AITom. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Zeng_Gum-Net_Unsupervised_Geometric_Matching_for_Fast_and_Accurate_3D_Subtomogram_CVPR_2020_paper.pdf | null | https://www.youtube.com/watch?v=Vq2x42Vdbj0 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Zeng_Gum-Net_Unsupervised_Geometric_Matching_for_Fast_and_Accurate_3D_Subtomogram_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Zeng_Gum-Net_Unsupervised_Geometric_Matching_for_Fast_and_Accurate_3D_Subtomogram_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Zeng_Gum-Net_Unsupervised_Geometric_CVPR_2020_supplemental.pdf | null | null |
Deep Parametric Shape Predictions Using Distance Fields | Dmitriy Smirnov, Matthew Fisher, Vladimir G. Kim, Richard Zhang, Justin Solomon | Many tasks in graphics and vision demand machinery for converting shapes into consistent representations with sparse sets of parameters; these representations facilitate rendering, editing, and storage. When the source data is noisy or ambiguous, however, artists and engineers often manually construct such representations, a tedious and potentially time-consuming process. While advances in deep learning have been successfully applied to noisy geometric data, the task of generating parametric shapes has so far been difficult for these methods. Hence, we propose a new framework for predicting parametric shape primitives using deep learning. We use distance fields to transition between shape parameters like control points and input data on a pixel grid. We demonstrate efficacy on 2D and 3D tasks, including font vectorization and surface abstraction. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Smirnov_Deep_Parametric_Shape_Predictions_Using_Distance_Fields_CVPR_2020_paper.pdf | http://arxiv.org/abs/1904.08921 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Smirnov_Deep_Parametric_Shape_Predictions_Using_Distance_Fields_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Smirnov_Deep_Parametric_Shape_Predictions_Using_Distance_Fields_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Smirnov_Deep_Parametric_Shape_CVPR_2020_supplemental.pdf | null | null |
Correspondence-Free Material Reconstruction using Sparse Surface Constraints | Sebastian Weiss, Robert Maier, Daniel Cremers, Rudiger Westermann, Nils Thuerey | We present a method to infer physical material parameters, and even external boundaries, from the scanned motion of a homogeneous deformable object via the solution of an inverse problem. Parameters are estimated from real-world data sources such as sparse observations from a Kinect sensor without correspondences. We introduce a novel Lagrangian-Eulerian optimization formulation, including a cost function that penalizes differences to observations during an optimization run. This formulation matches correspondence-free, sparse observations from a single-view depth image with a finite element simulation of deformable bodies. In a number of tests using synthetic datasets and real-world measurements, we analyse the robustness of our approach and the convergence behavior of the numerical optimization scheme. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Weiss_Correspondence-Free_Material_Reconstruction_using_Sparse_Surface_Constraints_CVPR_2020_paper.pdf | null | https://www.youtube.com/watch?v=8W1KmqvKelU | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Weiss_Correspondence-Free_Material_Reconstruction_using_Sparse_Surface_Constraints_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Weiss_Correspondence-Free_Material_Reconstruction_using_Sparse_Surface_Constraints_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Weiss_Correspondence-Free_Material_Reconstruction_CVPR_2020_supplemental.zip | null | null |
PointPainting: Sequential Fusion for 3D Object Detection | Sourabh Vora, Alex H. Lang, Bassam Helou, Oscar Beijbom | Camera and lidar are important sensor modalities for robotics in general and self-driving cars in particular. The sensors provide complementary information, offering an opportunity for tight sensor fusion. Surprisingly, lidar-only methods outperform fusion methods on the main benchmark datasets, suggesting a gap in the literature. In this work, we propose PointPainting: a sequential fusion method to fill this gap. PointPainting works by projecting lidar points into the output of an image-only semantic segmentation network and appending the class scores to each point. The appended (painted) point cloud can then be fed to any lidar-only method. Experiments show large improvements on three different state-of-the-art methods, PointRCNN, VoxelNet and PointPillars, on the KITTI and nuScenes datasets. The painted version of PointRCNN represents a new state of the art on the KITTI leaderboard for the bird's-eye view detection task. In ablation, we study how the effects of painting depend on the quality and format of the semantic segmentation output, and demonstrate how latency can be minimized through pipelining. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Vora_PointPainting_Sequential_Fusion_for_3D_Object_Detection_CVPR_2020_paper.pdf | http://arxiv.org/abs/1911.10150 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Vora_PointPainting_Sequential_Fusion_for_3D_Object_Detection_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Vora_PointPainting_Sequential_Fusion_for_3D_Object_Detection_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Vora_PointPainting_Sequential_Fusion_CVPR_2020_supplemental.pdf | null | null |
Adaptive Subspaces for Few-Shot Learning | Christian Simon, Piotr Koniusz, Richard Nock, Mehrtash Harandi | Object recognition requires a generalization capability to avoid overfitting, especially when the samples are extremely few. Generalization from limited samples, usually studied under the umbrella of meta-learning, equips learning techniques with the ability to adapt quickly in dynamic environments and proves to be an essential aspect of lifelong learning. In this paper, we provide a framework for few-shot learning by introducing dynamic classifiers that are constructed from few samples. A subspace method is exploited as the central block of a dynamic classifier. We empirically show that such modelling leads to robustness against perturbations (e.g., outliers) and yields competitive results on the task of supervised and semi-supervised few-shot classification. We also develop a discriminative form which can boost the accuracy even further. Our code is available at https://github.com/chrysts/dsn_fewshot | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Simon_Adaptive_Subspaces_for_Few-Shot_Learning_CVPR_2020_paper.pdf | null | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Simon_Adaptive_Subspaces_for_Few-Shot_Learning_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Simon_Adaptive_Subspaces_for_Few-Shot_Learning_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Simon_Adaptive_Subspaces_for_CVPR_2020_supplemental.pdf | null | null |
In Perfect Shape: Certifiably Optimal 3D Shape Reconstruction From 2D Landmarks | Heng Yang, Luca Carlone | We study the problem of 3D shape reconstruction from 2D landmarks extracted in a single image. We adopt the 3D deformable shape model and formulate the reconstruction as a joint optimization of the camera pose and the linear shape parameters. Our first contribution is to apply Lasserre's hierarchy of convex Sums-of-Squares (SOS) relaxations to solve the shape reconstruction problem and show that the SOS relaxation of minimum order 2 empirically solves the original non-convex problem exactly. Our second contribution is to exploit the structure of the polynomial in the objective function and find a reduced set of basis monomials for the SOS relaxation that significantly decreases the size of the resulting semidefinite program (SDP) without compromising its accuracy. These two contributions, to the best of our knowledge, lead to the first certifiably optimal solver for 3D shape reconstruction, that we name Shape*. Our third contribution is to add an outlier rejection layer to Shape* using a truncated least squares (TLS) robust cost function and leveraging graduated non-convexity to solve TLS without initialization. The result is a robust reconstruction algorithm, named Shape#, that tolerates a large amount of outlier measurements. We evaluate the performance of Shape* and Shape# in both simulated and real experiments, showing that Shape* outperforms local optimization and previous convex relaxation techniques, while Shape# achieves state-of-the-art performance and is robust against 70% outliers in the FG3DCar dataset. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Yang_In_Perfect_Shape_Certifiably_Optimal_3D_Shape_Reconstruction_From_2D_CVPR_2020_paper.pdf | http://arxiv.org/abs/1911.11924 | https://www.youtube.com/watch?v=Wl3GCE9pKnc | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Yang_In_Perfect_Shape_Certifiably_Optimal_3D_Shape_Reconstruction_From_2D_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Yang_In_Perfect_Shape_Certifiably_Optimal_3D_Shape_Reconstruction_From_2D_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Yang_In_Perfect_Shape_CVPR_2020_supplemental.pdf | null | null |
Dynamic Graph Message Passing Networks | Li Zhang, Dan Xu, Anurag Arnab, Philip H.S. Torr | Modelling long-range dependencies is critical for scene understanding tasks in computer vision. Although CNNs have excelled in many vision tasks, they are still limited in capturing long-range structured relationships as they typically consist of layers of local kernels. A fully-connected graph is beneficial for such modelling, however, its computational overhead is prohibitive. We propose a dynamic graph message passing network, that significantly reduces the computational complexity compared to related works modelling a fully-connected graph. This is achieved by adaptively sampling nodes in the graph, conditioned on the input, for message passing. Based on the sampled nodes, we dynamically predict node-dependent filter weights and the affinity matrix for propagating information between them. Using this model, we show significant improvements with respect to strong, state-of-the-art baselines on three different tasks and backbone architectures. Our approach also outperforms fully-connected graphs while using substantially fewer floating-point operations and parameters. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Zhang_Dynamic_Graph_Message_Passing_Networks_CVPR_2020_paper.pdf | http://arxiv.org/abs/1908.06955 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Zhang_Dynamic_Graph_Message_Passing_Networks_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Zhang_Dynamic_Graph_Message_Passing_Networks_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Zhang_Dynamic_Graph_Message_CVPR_2020_supplemental.pdf | null | null |
Evaluating Weakly Supervised Object Localization Methods Right | Junsuk Choe, Seong Joon Oh, Seungho Lee, Sanghyuk Chun, Zeynep Akata, Hyunjung Shim | Weakly-supervised object localization (WSOL) has gained popularity over the last years for its promise to train localization models with only image-level labels. Since the seminal WSOL work of class activation mapping (CAM), the field has focused on how to expand the attention regions to cover objects more broadly and localize them better. However, these strategies rely on full localization supervision to validate hyperparameters and for model selection, which is in principle prohibited under the WSOL setup. In this paper, we argue that WSOL task is ill-posed with only image-level labels, and propose a new evaluation protocol where full supervision is limited to only a small held-out set not overlapping with the test set. We observe that, under our protocol, the five most recent WSOL methods have not made a major improvement over the CAM baseline. Moreover, we report that existing WSOL methods have not reached the few-shot learning baseline, where the full-supervision at validation time is used for model training instead. Based on our findings, we discuss some future directions for WSOL. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Choe_Evaluating_Weakly_Supervised_Object_Localization_Methods_Right_CVPR_2020_paper.pdf | http://arxiv.org/abs/2001.07437 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Choe_Evaluating_Weakly_Supervised_Object_Localization_Methods_Right_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Choe_Evaluating_Weakly_Supervised_Object_Localization_Methods_Right_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Choe_Evaluating_Weakly_Supervised_CVPR_2020_supplemental.pdf | null | null |
ClusterVO: Clustering Moving Instances and Estimating Visual Odometry for Self and Surroundings | Jiahui Huang, Sheng Yang, Tai-Jiang Mu, Shi-Min Hu | We present ClusterVO, a stereo Visual Odometry which simultaneously clusters and estimates the motion of both ego and surrounding rigid clusters/objects. Unlike previous solutions relying on batch input or imposing priors on scene structure or dynamic object models, ClusterVO is online, general and thus can be used in various scenarios including indoor scene understanding and autonomous driving. At the core of our system lies a multi-level probabilistic association mechanism and a heterogeneous Conditional Random Field (CRF) clustering approach combining semantic, spatial and motion information to jointly infer cluster segmentations online for every frame. The poses of camera and dynamic objects are instantly solved through a sliding-window optimization. Our system is evaluated on Oxford Multimotion and KITTI dataset both quantitatively and qualitatively, reaching comparable results to state-of-the-art solutions on both odometry and dynamic trajectory recovery. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Huang_ClusterVO_Clustering_Moving_Instances_and_Estimating_Visual_Odometry_for_Self_CVPR_2020_paper.pdf | http://arxiv.org/abs/2003.12980 | https://www.youtube.com/watch?v=p1aq0851-NU | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Huang_ClusterVO_Clustering_Moving_Instances_and_Estimating_Visual_Odometry_for_Self_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Huang_ClusterVO_Clustering_Moving_Instances_and_Estimating_Visual_Odometry_for_Self_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Huang_ClusterVO_Clustering_Moving_CVPR_2020_supplemental.zip | null | null |
DaST: Data-Free Substitute Training for Adversarial Attacks | Mingyi Zhou, Jing Wu, Yipeng Liu, Shuaicheng Liu, Ce Zhu | Machine learning models are vulnerable to adversarial examples. In the black-box setting, current substitute attacks need pre-trained models to generate adversarial examples. However, pre-trained models are hard to obtain in real-world tasks. In this paper, we propose a data-free substitute training method (DaST) to obtain substitute models for adversarial black-box attacks without the requirement of any real data. To achieve this, DaST utilizes specially designed generative adversarial networks (GANs) to train the substitute models. In particular, we design a multi-branch architecture and label-control loss for the generative model to deal with the uneven distribution of synthetic samples. The substitute model is then trained on the synthetic samples generated by the generative model, which are subsequently labeled by the attacked model. The experiments demonstrate that the substitute models produced by DaST can achieve competitive performance compared with baseline models that are trained on the same training set as the attacked models. Additionally, to evaluate the practicability of the proposed method on a real-world task, we attack an online machine learning model on the Microsoft Azure platform. The remote model misclassifies 98.35% of the adversarial examples crafted by our method. To the best of our knowledge, we are the first to train a substitute model for adversarial attacks without any real data. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Zhou_DaST_Data-Free_Substitute_Training_for_Adversarial_Attacks_CVPR_2020_paper.pdf | http://arxiv.org/abs/2003.12703 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Zhou_DaST_Data-Free_Substitute_Training_for_Adversarial_Attacks_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Zhou_DaST_Data-Free_Substitute_Training_for_Adversarial_Attacks_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Zhou_DaST_Data-Free_Substitute_CVPR_2020_supplemental.zip | null | null |
Cross-Domain Semantic Segmentation via Domain-Invariant Interactive Relation Transfer | Fengmao Lv, Tao Liang, Xiang Chen, Guosheng Lin | Exploiting photo-realistic synthetic data to train semantic segmentation models has received increasing attention over the past years. However, the domain mismatch between synthetic and real images will cause a significant performance drop when the model trained with synthetic images is directly applied to real-world scenarios. In this paper, we propose a new domain adaptation approach, called Pivot Interaction Transfer (PIT). Our method mainly focuses on constructing pivot information that is common knowledge shared across domains as a bridge to promote the adaptation of semantic segmentation model from synthetic domains to real-world domains. Specifically, we first infer the image-level category information about the target images, which is then utilized to facilitate pixel-level transfer for semantic segmentation, with the assumption that the interactive relation between the image-level category information and the pixel-level semantic information is invariant across domains. To this end, we propose a novel multi-level region expansion mechanism that aligns both the image-level and pixel-level information. Comprehensive experiments on the adaptation from both GTAV and SYNTHIA to Cityscapes clearly demonstrate the superiority of our method. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Lv_Cross-Domain_Semantic_Segmentation_via_Domain-Invariant_Interactive_Relation_Transfer_CVPR_2020_paper.pdf | null | https://www.youtube.com/watch?v=L57s4Xaad24 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Lv_Cross-Domain_Semantic_Segmentation_via_Domain-Invariant_Interactive_Relation_Transfer_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Lv_Cross-Domain_Semantic_Segmentation_via_Domain-Invariant_Interactive_Relation_Transfer_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
Minimal Solutions for Relative Pose With a Single Affine Correspondence | Banglei Guan, Ji Zhao, Zhang Li, Fang Sun, Friedrich Fraundorfer | In this paper, we present four cases of minimal solutions for two-view relative pose estimation by exploiting the affine transformation between feature points, and we demonstrate efficient solvers for these cases. It is shown that, under the planar motion assumption or with knowledge of a vertical direction, a single affine correspondence is sufficient to recover the relative camera pose. The four cases considered are: two-view planar relative motion for calibrated cameras, with both a closed-form and a least-squares solution; a closed-form solution for unknown focal length; and the case of a known vertical direction. These algorithms can be used efficiently for outlier detection within a RANSAC loop and for initial motion estimation. All the methods are evaluated on both synthetic data and real-world datasets from the KITTI benchmark. The experimental results demonstrate that our methods outperform comparable state-of-the-art methods in accuracy while requiring fewer RANSAC iterations. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Guan_Minimal_Solutions_for_Relative_Pose_With_a_Single_Affine_Correspondence_CVPR_2020_paper.pdf | http://arxiv.org/abs/1912.10776 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Guan_Minimal_Solutions_for_Relative_Pose_With_a_Single_Affine_Correspondence_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Guan_Minimal_Solutions_for_Relative_Pose_With_a_Single_Affine_Correspondence_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Guan_Minimal_Solutions_for_CVPR_2020_supplemental.pdf | null | null |
Agriculture-Vision: A Large Aerial Image Database for Agricultural Pattern Analysis | Mang Tik Chiu, Xingqian Xu, Yunchao Wei, Zilong Huang, Alexander G. Schwing, Robert Brunner, Hrant Khachatrian, Hovnatan Karapetyan, Ivan Dozier, Greg Rose, David Wilson, Adrian Tudor, Naira Hovakimyan, Thomas S. Huang, Honghui Shi | The success of deep learning in visual recognition tasks has driven advancements in multiple fields of research. Particularly, increasing attention has been drawn towards its application in agriculture. Nevertheless, while visual pattern recognition on farmlands carries enormous economic values, little progress has been made to merge computer vision and crop sciences due to the lack of suitable agricultural image datasets. Meanwhile, problems in agriculture also pose new challenges in computer vision. For example, semantic segmentation of aerial farmland images requires inference over extremely large-size images with extreme annotation sparsity. These challenges are not present in most of the common object datasets, and we show that they are more challenging than many other aerial image datasets. To encourage research in computer vision for agriculture, we present Agriculture-Vision: a large-scale aerial farmland image dataset for semantic segmentation of agricultural patterns. We collected 94,986 high-quality aerial images from 3,432 farmlands across the US, where each image consists of RGB and Near-infrared (NIR) channels with resolution as high as 10 cm per pixel. We annotate nine types of field anomaly patterns that are most important to farmers. As a pilot study of aerial agricultural semantic segmentation, we perform comprehensive experiments using popular semantic segmentation models; we also propose an effective model designed for aerial agricultural pattern recognition. Our experiments demonstrate several challenges Agriculture-Vision poses to both the computer vision and agriculture communities. Future versions of this dataset will include even more aerial images, anomaly patterns and image channels. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Chiu_Agriculture-Vision_A_Large_Aerial_Image_Database_for_Agricultural_Pattern_Analysis_CVPR_2020_paper.pdf | null | https://www.youtube.com/watch?v=fcxU6CSVQfA | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Chiu_Agriculture-Vision_A_Large_Aerial_Image_Database_for_Agricultural_Pattern_Analysis_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Chiu_Agriculture-Vision_A_Large_Aerial_Image_Database_for_Agricultural_Pattern_Analysis_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
ActiveMoCap: Optimized Viewpoint Selection for Active Human Motion Capture | Sena Kiciroglu, Helge Rhodin, Sudipta N. Sinha, Mathieu Salzmann, Pascal Fua | The accuracy of monocular 3D human pose estimation depends on the viewpoint from which the image is captured. While freely moving cameras, such as on drones, provide control over this viewpoint, automatically positioning them at the location which will yield the highest accuracy remains an open problem. This is the problem that we address in this paper. Specifically, given a short video sequence, we introduce an algorithm that predicts which viewpoints should be chosen to capture future frames so as to maximize 3D human pose estimation accuracy. The key idea underlying our approach is a method to estimate the uncertainty of the 3D body pose estimates. We integrate several sources of uncertainty, originating from deep learning based regressors and temporal smoothness. Our motion planner yields improved 3D body pose estimates and outperforms or matches existing ones that are based on person following and orbiting. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Kiciroglu_ActiveMoCap_Optimized_Viewpoint_Selection_for_Active_Human_Motion_Capture_CVPR_2020_paper.pdf | http://arxiv.org/abs/1912.08568 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Kiciroglu_ActiveMoCap_Optimized_Viewpoint_Selection_for_Active_Human_Motion_Capture_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Kiciroglu_ActiveMoCap_Optimized_Viewpoint_Selection_for_Active_Human_Motion_Capture_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Kiciroglu_ActiveMoCap_Optimized_Viewpoint_CVPR_2020_supplemental.pdf | null | null |
ScrabbleGAN: Semi-Supervised Varying Length Handwritten Text Generation | Sharon Fogel, Hadar Averbuch-Elor, Sarel Cohen, Shai Mazor, Roee Litman | The performance of optical character recognition (OCR) systems has improved significantly in the deep learning era. This is especially true for handwritten text recognition (HTR), where each author has a unique style, unlike printed text, where the variation is smaller by design. That said, deep learning based HTR is limited, as in every other task, by the number of training examples. Gathering data is a challenging and costly task, and even more so is the labeling task that follows, which is our focus here. One possible approach to reduce the burden of data annotation is semi-supervised learning. Semi-supervised methods use, in addition to labeled data, some unlabeled samples to improve performance compared to fully supervised ones. Consequently, such methods may adapt to unseen images during test time. We present ScrabbleGAN, a semi-supervised approach to synthesize handwritten text images that are versatile both in style and lexicon. ScrabbleGAN relies on a novel generative model which can generate images of words with an arbitrary length. We show how to operate our approach in a semi-supervised manner, enjoying the aforementioned benefits such as a performance boost over state-of-the-art supervised HTR. Furthermore, our generator can manipulate the resulting text style. This allows us to change, for instance, whether the text is cursive, or how thin the pen stroke is. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Fogel_ScrabbleGAN_Semi-Supervised_Varying_Length_Handwritten_Text_Generation_CVPR_2020_paper.pdf | null | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Fogel_ScrabbleGAN_Semi-Supervised_Varying_Length_Handwritten_Text_Generation_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Fogel_ScrabbleGAN_Semi-Supervised_Varying_Length_Handwritten_Text_Generation_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Fogel_ScrabbleGAN_Semi-Supervised_Varying_CVPR_2020_supplemental.pdf | null | null |
Scalability in Perception for Autonomous Driving: Waymo Open Dataset | Pei Sun, Henrik Kretzschmar, Xerxes Dotiwalla, Aurelien Chouard, Vijaysai Patnaik, Paul Tsui, James Guo, Yin Zhou, Yuning Chai, Benjamin Caine, Vijay Vasudevan, Wei Han, Jiquan Ngiam, Hang Zhao, Aleksei Timofeev, Scott Ettinger, Maxim Krivokon, Amy Gao, Aditya Joshi, Yu Zhang, Jonathon Shlens, Zhifeng Chen, Dragomir Anguelov | The research community has increasing interest in autonomous driving research, despite the resource intensity of obtaining representative real world data. Existing self-driving datasets are limited in the scale and variation of the environments they capture, even though generalization within and between operating regions is crucial to the over-all viability of the technology. In an effort to help align the research community's contributions with real-world self-driving problems, we introduce a new large scale, high quality, diverse dataset. Our new dataset consists of 1150 scenes that each span 20 seconds, consisting of well synchronized and calibrated high quality LiDAR and camera data captured across a range of urban and suburban geographies. It is 15x more diverse than the largest camera+LiDAR dataset available based on our proposed diversity metric. We exhaustively annotated this data with 2D (camera image) and 3D (LiDAR) bounding boxes, with consistent identifiers across frames. Finally, we provide strong baselines for 2D as well as 3D detection and tracking tasks. We further study the effects of dataset size and generalization across geographies on 3D detection methods. Find data, code and more up-to-date information at http://www.waymo.com/open. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Sun_Scalability_in_Perception_for_Autonomous_Driving_Waymo_Open_Dataset_CVPR_2020_paper.pdf | http://arxiv.org/abs/1912.04838 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Sun_Scalability_in_Perception_for_Autonomous_Driving_Waymo_Open_Dataset_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Sun_Scalability_in_Perception_for_Autonomous_Driving_Waymo_Open_Dataset_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
Multi-Modal Domain Adaptation for Fine-Grained Action Recognition | Jonathan Munro, Dima Damen | Fine-grained action recognition datasets exhibit environmental bias, where multiple video sequences are captured from a limited number of environments. Training a model in one environment and deploying in another results in a drop in performance due to an unavoidable domain shift. Unsupervised Domain Adaptation (UDA) approaches have frequently utilised adversarial training between the source and target domains. However, these approaches have not explored the multi-modal nature of video within each domain. In this work we exploit the correspondence of modalities as a self-supervised alignment approach for UDA in addition to adversarial alignment (Fig. 1). We test our approach on three kitchens from the large-scale EPIC-Kitchens dataset, using two modalities commonly employed for action recognition: RGB and Optical Flow. We show that multi-modal self-supervision alone improves the performance over source-only training by 2.4% on average. We then combine adversarial training with multi-modal self-supervision, showing that our approach outperforms other UDA methods by 3%. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Munro_Multi-Modal_Domain_Adaptation_for_Fine-Grained_Action_Recognition_CVPR_2020_paper.pdf | http://arxiv.org/abs/2001.09691 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Munro_Multi-Modal_Domain_Adaptation_for_Fine-Grained_Action_Recognition_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Munro_Multi-Modal_Domain_Adaptation_for_Fine-Grained_Action_Recognition_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
A Sparse Resultant Based Method for Efficient Minimal Solvers | Snehal Bhayani, Zuzana Kukelova, Janne Heikkila | Many computer vision applications require robust and efficient estimation of camera geometry. The robust estimation is usually based on solving camera geometry problems from a minimal number of input data measurements, i.e. solving minimal problems in a RANSAC framework. Minimal problems often result in complex systems of polynomial equations. Many state-of-the-art efficient polynomial solvers for these problems are based on Grobner bases and the action-matrix method, which has been automatized and highly optimized in recent years. In this paper, we study an alternative algebraic method for solving systems of polynomial equations, i.e., the sparse resultant-based method, and propose a novel approach to convert the resultant constraint to an eigenvalue problem. This technique can significantly improve the efficiency and stability of existing resultant-based solvers. We apply our new resultant-based method to a large variety of computer vision problems and show that, for most of the considered problems, the new method leads to solvers that are the same size as the best available Grobner basis solvers and of similar accuracy. For some problems, the new sparse resultant-based method leads to even smaller and more stable solvers than the state-of-the-art Grobner basis solvers. Our new method can be fully automatized and incorporated into existing tools for automatic generation of efficient polynomial solvers, and as such it represents a competitive alternative to popular Grobner basis methods for minimal problems in computer vision. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Bhayani_A_Sparse_Resultant_Based_Method_for_Efficient_Minimal_Solvers_CVPR_2020_paper.pdf | http://arxiv.org/abs/1912.10268 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Bhayani_A_Sparse_Resultant_Based_Method_for_Efficient_Minimal_Solvers_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Bhayani_A_Sparse_Resultant_Based_Method_for_Efficient_Minimal_Solvers_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Bhayani_A_Sparse_Resultant_CVPR_2020_supplemental.pdf | null | null |
DeeperForensics-1.0: A Large-Scale Dataset for Real-World Face Forgery Detection | Liming Jiang, Ren Li, Wayne Wu, Chen Qian, Chen Change Loy | We present our ongoing effort of constructing a large-scale benchmark for face forgery detection. The first version of this benchmark, DeeperForensics-1.0, represents the largest face forgery detection dataset by far, with 60,000 videos consisting of a total of 17.6 million frames, 10 times larger than existing datasets of the same kind. Extensive real-world perturbations are applied to obtain a more challenging benchmark of larger scale and higher diversity. All source videos in DeeperForensics-1.0 are carefully collected, and fake videos are generated by a newly proposed end-to-end face swapping framework. The quality of the generated videos surpasses that of existing datasets, as validated by user studies. The benchmark features a hidden test set, which contains manipulated videos achieving high deceptive scores in human evaluations. We further contribute a comprehensive study that evaluates five representative detection baselines and makes a thorough analysis of different settings. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Jiang_DeeperForensics-1.0_A_Large-Scale_Dataset_for_Real-World_Face_Forgery_Detection_CVPR_2020_paper.pdf | null | https://www.youtube.com/watch?v=ZLWKZ6ej-AQ | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Jiang_DeeperForensics-1.0_A_Large-Scale_Dataset_for_Real-World_Face_Forgery_Detection_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Jiang_DeeperForensics-1.0_A_Large-Scale_Dataset_for_Real-World_Face_Forgery_Detection_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Jiang_DeeperForensics-1.0_A_Large-Scale_CVPR_2020_supplemental.pdf | null | null |
Shape Reconstruction by Learning Differentiable Surface Representations | Jan Bednarik, Shaifali Parashar, Erhan Gundogdu, Mathieu Salzmann, Pascal Fua | Generative models that produce point clouds have emerged as a powerful tool to represent 3D surfaces, and the best current ones rely on learning an ensemble of parametric representations. Unfortunately, they offer no control over the deformations of the surface patches that form the ensemble and thus fail to prevent them from either overlapping or collapsing into single points or lines. As a consequence, computing shape properties such as surface normals and curvatures becomes difficult and unreliable. In this paper, we show that we can exploit the inherent differentiability of deep networks to leverage differential surface properties during training so as to prevent patch collapse and strongly reduce patch overlap. Furthermore, this lets us reliably compute quantities such as surface normals and curvatures. We will demonstrate on several tasks that this yields more accurate surface reconstructions than the state-of-the-art methods in terms of normals estimation and amount of collapsed and overlapped patches. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Bednarik_Shape_Reconstruction_by_Learning_Differentiable_Surface_Representations_CVPR_2020_paper.pdf | http://arxiv.org/abs/1911.11227 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Bednarik_Shape_Reconstruction_by_Learning_Differentiable_Surface_Representations_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Bednarik_Shape_Reconstruction_by_Learning_Differentiable_Surface_Representations_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Bednarik_Shape_Reconstruction_by_CVPR_2020_supplemental.pdf | null | null |
Temporal Pyramid Network for Action Recognition | Ceyuan Yang, Yinghao Xu, Jianping Shi, Bo Dai, Bolei Zhou | Visual tempo characterizes the dynamics and the temporal scale of an action. Modeling such visual tempos of different actions facilitates their recognition. Previous works often capture the visual tempo through sampling raw videos at multiple rates and constructing an input-level frame pyramid, which usually requires a costly multi-branch network to handle. In this work we propose a generic Temporal Pyramid Network (TPN) at the feature-level, which can be flexibly integrated into 2D or 3D backbone networks in a plug-and-play manner. Two essential components of TPN, the source of features and the fusion of features, form a feature hierarchy for the backbone so that it can capture action instances at various tempos. TPN also shows consistent improvements over other challenging baselines on several action recognition datasets. Specifically, when equipped with TPN, the 3D ResNet-50 with dense sampling obtains a 2% gain on the validation set of Kinetics-400. A further analysis also reveals that TPN gains most of its improvements on action classes that have large variances in their visual tempos, validating the effectiveness of TPN. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Yang_Temporal_Pyramid_Network_for_Action_Recognition_CVPR_2020_paper.pdf | http://arxiv.org/abs/2004.03548 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Yang_Temporal_Pyramid_Network_for_Action_Recognition_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Yang_Temporal_Pyramid_Network_for_Action_Recognition_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
Fusion-Aware Point Convolution for Online Semantic 3D Scene Segmentation | Jiazhao Zhang, Chenyang Zhu, Lintao Zheng, Kai Xu | Online semantic 3D segmentation in conjunction with real-time RGB-D reconstruction poses special challenges, such as how to perform 3D convolution directly over the progressively fused 3D geometric data and how to smartly fuse information from frame to frame. We propose a novel fusion-aware 3D point convolution which operates directly on the geometric surface being reconstructed and effectively exploits the inter-frame correlation for high-quality 3D feature learning. This is enabled by a dedicated dynamic data structure that organizes the online acquired point cloud with local-global trees. Globally, we compile the online reconstructed 3D points into an incrementally growing coordinate interval tree, enabling fast point insertion and neighborhood query. Locally, we maintain the neighborhood information for each point using an octree whose construction benefits from the fast query of the global tree. The local octrees facilitate efficient surface-aware point convolution. Both levels of trees update dynamically and help the 3D convolution effectively exploit the temporal coherence for information fusion across RGB-D frames. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Zhang_Fusion-Aware_Point_Convolution_for_Online_Semantic_3D_Scene_Segmentation_CVPR_2020_paper.pdf | http://arxiv.org/abs/2003.06233 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Zhang_Fusion-Aware_Point_Convolution_for_Online_Semantic_3D_Scene_Segmentation_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Zhang_Fusion-Aware_Point_Convolution_for_Online_Semantic_3D_Scene_Segmentation_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
Skeleton-Based Action Recognition With Shift Graph Convolutional Network | Ke Cheng, Yifan Zhang, Xiangyu He, Weihan Chen, Jian Cheng, Hanqing Lu | Action recognition with skeleton data is attracting more attention in computer vision. Recently, graph convolutional networks (GCNs), which model the human body skeletons as spatiotemporal graphs, have obtained remarkable performance. However, the computational complexity of GCN-based methods is rather heavy, typically over 15 GFLOPs for one action sample. Recent works even reach about 100 GFLOPs. Another shortcoming is that the receptive fields of both the spatial graph and the temporal graph are inflexible. Although some works enhance the expressiveness of the spatial graph by introducing incremental adaptive modules, their performance is still limited by regular GCN structures. In this paper, we propose a novel shift graph convolutional network (Shift-GCN) to overcome both shortcomings. Instead of using heavy regular graph convolutions, our Shift-GCN is composed of novel shift graph operations and lightweight point-wise convolutions, where the shift graph operations provide flexible receptive fields for both the spatial graph and the temporal graph. On three datasets for skeleton-based action recognition, the proposed Shift-GCN notably exceeds the state-of-the-art methods with more than 10 times less computational complexity. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Cheng_Skeleton-Based_Action_Recognition_With_Shift_Graph_Convolutional_Network_CVPR_2020_paper.pdf | null | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Cheng_Skeleton-Based_Action_Recognition_With_Shift_Graph_Convolutional_Network_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Cheng_Skeleton-Based_Action_Recognition_With_Shift_Graph_Convolutional_Network_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Cheng_Skeleton-Based_Action_Recognition_CVPR_2020_supplemental.pdf | null | null |
How Does Noise Help Robustness? Explanation and Exploration under the Neural SDE Framework | Xuanqing Liu, Tesi Xiao, Si Si, Qin Cao, Sanjiv Kumar, Cho-Jui Hsieh | Neural Ordinary Differential Equation (Neural ODE) has been proposed as a continuous approximation to the ResNet architecture. Some commonly used regularization mechanisms in discrete neural networks (e.g., dropout, Gaussian noise) are missing in current Neural ODE networks. In this paper, we propose a new continuous neural network framework called Neural Stochastic Differential Equation (Neural SDE), which naturally incorporates various commonly used regularization mechanisms based on random noise injection. For regularization purposes, our framework includes multiple types of noise patterns, such as dropout, additive, and multiplicative noise, which are common in plain neural networks. We provide some theoretical analyses explaining the improved robustness of our models against input perturbations. Furthermore, we demonstrate that the Neural SDE network can achieve better generalization than the Neural ODE and is more resistant to adversarial and non-adversarial input perturbations. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Liu_How_Does_Noise_Help_Robustness_Explanation_and_Exploration_under_the_CVPR_2020_paper.pdf | null | https://www.youtube.com/watch?v=LX_xW3Hhddg | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Liu_How_Does_Noise_Help_Robustness_Explanation_and_Exploration_under_the_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Liu_How_Does_Noise_Help_Robustness_Explanation_and_Exploration_under_the_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Liu_How_Does_Noise_CVPR_2020_supplemental.pdf | null | null |
Context-Aware Group Captioning via Self-Attention and Contrastive Features | Zhuowan Li, Quan Tran, Long Mai, Zhe Lin, Alan L. Yuille | While image captioning has progressed rapidly, existing works focus mainly on describing single images. In this paper, we introduce a new task, context-aware group captioning, which aims to describe a group of target images in the context of another group of related reference images. Context-aware group captioning requires not only summarizing information from both the target and reference image groups but also contrasting between them. To solve this problem, we propose a framework combining a self-attention mechanism with contrastive feature construction to effectively summarize common information from each image group while capturing discriminative information between them. To build the dataset for this task, we propose to group the images and generate the group captions based on single image captions using scene graph matching. Our datasets are constructed on top of the public Conceptual Captions dataset and our new Stock Captions dataset. Experiments on the two datasets show the effectiveness of our method on this new task. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Li_Context-Aware_Group_Captioning_via_Self-Attention_and_Contrastive_Features_CVPR_2020_paper.pdf | http://arxiv.org/abs/2004.03708 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Li_Context-Aware_Group_Captioning_via_Self-Attention_and_Contrastive_Features_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Li_Context-Aware_Group_Captioning_via_Self-Attention_and_Contrastive_Features_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Li_Context-Aware_Group_Captioning_CVPR_2020_supplemental.pdf | null | null |
Learning to Forget for Meta-Learning | Sungyong Baik, Seokil Hong, Kyoung Mu Lee | Few-shot learning is a challenging problem where the goal is to achieve generalization from only few examples. Model-agnostic meta-learning (MAML) tackles the problem by formulating prior knowledge as a common initialization across tasks, which is then used to quickly adapt to unseen tasks. However, forcibly sharing an initialization can lead to conflicts among tasks and the compromised (undesired by tasks) location on optimization landscape, thereby hindering the task adaptation. Further, we observe that the degree of conflict differs among not only tasks but also layers of a neural network. Thus, we propose task-and-layer-wise attenuation on the compromised initialization to reduce its influence. As the attenuation dynamically controls (or selectively forgets) the influence of prior knowledge for a given task and each layer, we name our method as L2F (Learn to Forget). The experimental results demonstrate that the proposed method provides faster adaptation and greatly improves the performance. Furthermore, L2F can be easily applied and improve other state-of-the-art MAML-based frameworks, illustrating its simplicity and generalizability. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Baik_Learning_to_Forget_for_Meta-Learning_CVPR_2020_paper.pdf | http://arxiv.org/abs/1906.05895 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Baik_Learning_to_Forget_for_Meta-Learning_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Baik_Learning_to_Forget_for_Meta-Learning_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Baik_Learning_to_Forget_CVPR_2020_supplemental.pdf | null | null |
A Self-supervised Approach for Adversarial Robustness | Muzammal Naseer, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, Fatih Porikli | Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN) based vision systems, e.g., for classification, segmentation and object detection. The vulnerability of DNNs against such attacks can prove a major roadblock towards their real-world deployment. The transferability of adversarial examples demands generalizable defenses that can provide cross-task protection. Adversarial training that enhances robustness by modifying the target model's parameters lacks such generalizability. On the other hand, different input processing based defenses fall short in the face of continuously evolving attacks. In this paper, we take the first step to combine the benefits of both approaches and propose a self-supervised adversarial training mechanism in the input space. By design, our defense is a generalizable approach and provides significant robustness against unseen adversarial attacks (e.g. by reducing the success rate of the translation-invariant ensemble attack from 82.6% to 31.9% in comparison to previous state-of-the-art). It can be deployed as a plug-and-play solution to protect a variety of vision systems, as we demonstrate for the case of classification, segmentation and detection. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Naseer_A_Self-supervised_Approach_for_Adversarial_Robustness_CVPR_2020_paper.pdf | http://arxiv.org/abs/2006.04924 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Naseer_A_Self-supervised_Approach_for_Adversarial_Robustness_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Naseer_A_Self-supervised_Approach_for_Adversarial_Robustness_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Naseer_A_Self-supervised_Approach_CVPR_2020_supplemental.pdf | null | null |
Multimodal Future Localization and Emergence Prediction for Objects in Egocentric View With a Reachability Prior | Osama Makansi, Ozgun Cicek, Kevin Buchicchio, Thomas Brox | In this paper, we investigate the problem of anticipating future dynamics, particularly the future location of other vehicles and pedestrians, in the view of a moving vehicle. We approach two fundamental challenges: (1) the partial visibility due to the egocentric view with a single RGB camera and considerable field-of-view change due to the egomotion of the vehicle; (2) the multimodality of the distribution of future states. In contrast to many previous works, we do not assume structural knowledge from maps. We rather estimate a reachability prior for certain classes of objects from the semantic map of the present image and propagate it into the future using the planned egomotion. Experiments show that the reachability prior combined with multi-hypotheses learning improves multimodal prediction of the future location of tracked objects and, for the first time, the emergence of new objects. We also demonstrate promising zero-shot transfer to unseen datasets. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Makansi_Multimodal_Future_Localization_and_Emergence_Prediction_for_Objects_in_Egocentric_CVPR_2020_paper.pdf | http://arxiv.org/abs/2006.04700 | https://www.youtube.com/watch?v=_9Ml5IFwbSY | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Makansi_Multimodal_Future_Localization_and_Emergence_Prediction_for_Objects_in_Egocentric_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Makansi_Multimodal_Future_Localization_and_Emergence_Prediction_for_Objects_in_Egocentric_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Makansi_Multimodal_Future_Localization_CVPR_2020_supplemental.zip | null | null |
OccuSeg: Occupancy-Aware 3D Instance Segmentation | Lei Han, Tian Zheng, Lan Xu, Lu Fang | 3D instance segmentation, with a variety of applications in robotics and augmented reality, is in large demand these days. Unlike 2D images that are projective observations of the environment, 3D models provide metric reconstruction of the scenes without occlusion or scale ambiguity. In this paper, we define "3D occupancy size" as the number of voxels occupied by each instance. Occupancy size has the advantage of being robust to predict; on this basis, we propose OccuSeg, an occupancy-aware 3D instance segmentation scheme. Our multi-task learning produces both an occupancy signal and embedding representations, where the training of spatial and feature embeddings varies according to their difference in scale awareness. Our clustering scheme benefits from the reliable comparison between the predicted occupancy size and the clustered occupancy size, which encourages hard samples to be correctly clustered and avoids over-segmentation. The proposed approach achieves state-of-the-art performance on three real-world datasets, i.e. ScanNetV2, S3DIS and SceneNN, while maintaining high efficiency. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Han_OccuSeg_Occupancy-Aware_3D_Instance_Segmentation_CVPR_2020_paper.pdf | http://arxiv.org/abs/2003.06537 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Han_OccuSeg_Occupancy-Aware_3D_Instance_Segmentation_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Han_OccuSeg_Occupancy-Aware_3D_Instance_Segmentation_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Han_OccuSeg_Occupancy-Aware_3D_CVPR_2020_supplemental.pdf | null | null |
RevealNet: Seeing Behind Objects in RGB-D Scans | Ji Hou, Angela Dai, Matthias Niessner | During 3D reconstruction, it is often the case that people cannot scan each individual object from all views, resulting in missing geometry in the captured scan. This missing geometry can be fundamentally limiting for many applications, e.g., a robot needs to know the unseen geometry to perform a precise grasp on an object. Thus, we introduce the task of semantic instance completion: from an incomplete RGB-D scan of a scene, we aim to detect the individual object instances and infer their complete object geometry. This will open up new possibilities for interactions with objects in a scene, for instance for virtual or robotic agents. We tackle this problem by introducing RevealNet, a new data-driven approach that jointly detects object instances and predicts their complete geometry. This enables a semantically meaningful decomposition of a scanned scene into individual, complete 3D objects, including hidden and unobserved object parts. RevealNet is an end-to-end 3D neural network architecture that leverages joint color and geometry feature learning. The fully-convolutional nature of our 3D network enables efficient inference of semantic instance completion for 3D scans at scale of large indoor environments in a single forward pass. We show that predicting complete object geometry improves both 3D detection and instance segmentation performance. We evaluate on both real and synthetic scan benchmark data for the new task, where we outperform state-of-the-art approaches by over 15 in mAP@0.5 on ScanNet, and over 18 in mAP@0.5 on SUNCG. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Hou_RevealNet_Seeing_Behind_Objects_in_RGB-D_Scans_CVPR_2020_paper.pdf | null | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Hou_RevealNet_Seeing_Behind_Objects_in_RGB-D_Scans_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Hou_RevealNet_Seeing_Behind_Objects_in_RGB-D_Scans_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Hou_RevealNet_Seeing_Behind_CVPR_2020_supplemental.pdf | null | null |
Deep Optics for Single-Shot High-Dynamic-Range Imaging | Christopher A. Metzler, Hayato Ikoma, Yifan Peng, Gordon Wetzstein | High-dynamic-range (HDR) imaging is crucial for many applications. Yet, acquiring HDR images with a single shot remains a challenging problem. Whereas modern deep learning approaches are successful at hallucinating plausible HDR content from a single low-dynamic-range (LDR) image, saturated scene details often cannot be faithfully recovered. Inspired by recent deep optical imaging approaches, we interpret this problem as jointly training an optical encoder and electronic decoder where the encoder is parameterized by the point spread function (PSF) of the lens, the bottleneck is the sensor with a limited dynamic range, and the decoder is a convolutional neural network (CNN). The lens surface is then jointly optimized with the CNN in a training phase; we fabricate this optimized optical element and attach it as a hardware add-on to a conventional camera during inference. In extensive simulations and with a physical prototype, we demonstrate that this end-to-end deep optical imaging approach to single-shot HDR imaging outperforms both purely CNN-based approaches and other PSF engineering approaches. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Metzler_Deep_Optics_for_Single-Shot_High-Dynamic-Range_Imaging_CVPR_2020_paper.pdf | http://arxiv.org/abs/1908.00620 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Metzler_Deep_Optics_for_Single-Shot_High-Dynamic-Range_Imaging_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Metzler_Deep_Optics_for_Single-Shot_High-Dynamic-Range_Imaging_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Metzler_Deep_Optics_for_CVPR_2020_supplemental.pdf | null | null |
HybridPose: 6D Object Pose Estimation Under Hybrid Representations | Chen Song, Jiaru Song, Qixing Huang | We introduce HybridPose, a novel 6D object pose estimation approach. HybridPose utilizes a hybrid intermediate representation to express different geometric information in the input image, including keypoints, edge vectors, and symmetry correspondences. Compared to a unitary representation, our hybrid representation allows pose regression to exploit more and diverse features when one type of predicted representation is inaccurate (e.g., because of occlusion). Different intermediate representations used by HybridPose can all be predicted by the same simple neural network, and outliers in predicted intermediate representations are filtered by a robust regression module. Compared to state-of-the-art pose estimation approaches, HybridPose is comparable in running time and is significantly more accurate. For example, on Occlusion Linemod dataset, our method achieves a prediction speed of 30 fps with a mean ADD(-S) accuracy of 79.2%, representing a 67.4% improvement from the current state-of-the-art approach. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Song_HybridPose_6D_Object_Pose_Estimation_Under_Hybrid_Representations_CVPR_2020_paper.pdf | http://arxiv.org/abs/2001.01869 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Song_HybridPose_6D_Object_Pose_Estimation_Under_Hybrid_Representations_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Song_HybridPose_6D_Object_Pose_Estimation_Under_Hybrid_Representations_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Song_HybridPose_6D_Object_CVPR_2020_supplemental.pdf | null | null |
Ensemble Generative Cleaning With Feedback Loops for Defending Adversarial Attacks | Jianhe Yuan, Zhihai He | Effective defense of deep neural networks against adversarial attacks remains a challenging problem, especially under powerful white-box attacks. In this paper, we develop a new method called ensemble generative cleaning with feedback loops (EGC-FL) for effective defense of deep neural networks. The proposed EGC-FL method is based on two central ideas. First, we introduce a transformed deadzone layer into the defense network, which consists of an orthonormal transform and a deadzone-based activation function, to destroy the sophisticated noise pattern of adversarial attacks. Second, by constructing a generative cleaning network with a feedback loop, we are able to generate an ensemble of diverse estimations of the original clean image. We then learn a network to fuse this set of diverse estimations together to restore the original image. Our extensive experimental results demonstrate that our approach improves the state of the art by large margins in both white-box and black-box attacks. It significantly improves the classification accuracy under white-box PGD attacks over the second-best method by more than 29% on the SVHN dataset and more than 39% on the challenging CIFAR-10 dataset. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Yuan_Ensemble_Generative_Cleaning_With_Feedback_Loops_for_Defending_Adversarial_Attacks_CVPR_2020_paper.pdf | http://arxiv.org/abs/2004.11273 | https://www.youtube.com/watch?v=69KTm_utsC4 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Yuan_Ensemble_Generative_Cleaning_With_Feedback_Loops_for_Defending_Adversarial_Attacks_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Yuan_Ensemble_Generative_Cleaning_With_Feedback_Loops_for_Defending_Adversarial_Attacks_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
Spatial-Temporal Graph Convolutional Network for Video-Based Person Re-Identification | Jinrui Yang, Wei-Shi Zheng, Qize Yang, Ying-Cong Chen, Qi Tian | While video-based person re-identification (Re-ID) has drawn increasing attention and made great progress in recent years, it is still very challenging to effectively overcome the occlusion problem and the visual ambiguity problem for visually similar negative samples. On the other hand, we observe that different frames of a video can provide complementary information for each other, and the structural information of pedestrians can provide extra discriminative cues for appearance features. Thus, modeling the temporal relations of different frames and the spatial relations within a frame has the potential for solving the above problems. In this work, we propose a novel Spatial-Temporal Graph Convolutional Network (STGCN) to solve these problems. The STGCN includes two GCN branches, a spatial one and a temporal one. The spatial branch extracts structural information of a human body. The temporal branch mines discriminative cues from adjacent frames. By jointly optimizing these branches, our model extracts robust spatial-temporal information that is complementary with appearance information. As shown in the experiments, our model achieves state-of-the-art results on MARS and DukeMTMC-VideoReID datasets. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Yang_Spatial-Temporal_Graph_Convolutional_Network_for_Video-Based_Person_Re-Identification_CVPR_2020_paper.pdf | null | https://www.youtube.com/watch?v=gvnINhYEsqU | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Yang_Spatial-Temporal_Graph_Convolutional_Network_for_Video-Based_Person_Re-Identification_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Yang_Spatial-Temporal_Graph_Convolutional_Network_for_Video-Based_Person_Re-Identification_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Yang_Spatial-Temporal_Graph_Convolutional_CVPR_2020_supplemental.pdf | null | null |
The Secret Revealer: Generative Model-Inversion Attacks Against Deep Neural Networks | Yuheng Zhang, Ruoxi Jia, Hengzhi Pei, Wenxiao Wang, Bo Li, Dawn Song | This paper studies model-inversion attacks, in which the access to a model is abused to infer information about the training data. Since their first introduction, such attacks have raised serious concerns, given that training data usually contain privacy-sensitive information. Thus far, successful model-inversion attacks have only been demonstrated on simple models, such as linear regression and logistic regression. Previous attempts to invert neural networks, even the ones with simple architectures, have failed to produce convincing results. Here we present a novel attack method, termed the generative model-inversion attack, which can invert deep neural networks with high success rates. Rather than reconstructing private training data from scratch, we leverage partial public information, which can be very generic, to learn a distributional prior via generative adversarial networks (GANs) and use it to guide the inversion process. Moreover, we theoretically prove that a model's predictive power and its vulnerability to inversion attacks are indeed two sides of the same coin: highly predictive models are able to establish a strong correlation between features and labels, which coincides exactly with what an adversary exploits to mount the attacks. Our extensive experiments demonstrate that the proposed attack improves identification accuracy over the existing work by about 75% for reconstructing face images from a state-of-the-art face recognition classifier. We also show that differential privacy, in its canonical form, is of little avail to defend against our attacks. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Zhang_The_Secret_Revealer_Generative_Model-Inversion_Attacks_Against_Deep_Neural_Networks_CVPR_2020_paper.pdf | http://arxiv.org/abs/1911.07135 | https://www.youtube.com/watch?v=_g-oXYMhz4M | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Zhang_The_Secret_Revealer_Generative_Model-Inversion_Attacks_Against_Deep_Neural_Networks_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Zhang_The_Secret_Revealer_Generative_Model-Inversion_Attacks_Against_Deep_Neural_Networks_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Zhang_The_Secret_Revealer_CVPR_2020_supplemental.pdf | null | null |
Video Modeling With Correlation Networks | Heng Wang, Du Tran, Lorenzo Torresani, Matt Feiszli | Motion is a salient cue to recognize actions in video. Modern action recognition models leverage motion information either explicitly by using optical flow as input or implicitly by means of 3D convolutional filters that simultaneously capture appearance and motion information. This paper proposes an alternative approach based on a learnable correlation operator that can be used to establish frame-to-frame matches over convolutional feature maps in the different layers of the network. The proposed architecture enables the fusion of this explicit temporal matching information with traditional appearance cues captured by 2D convolution. Our correlation network compares favorably with widely-used 3D CNNs for video modeling, and achieves competitive results over the prominent two-stream network while being much faster to train. We empirically demonstrate that correlation networks produce strong results on a variety of video datasets, and outperform the state of the art on four popular benchmarks for action recognition: Kinetics, Something-Something, Diving48, and Sports1M. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Wang_Video_Modeling_With_Correlation_Networks_CVPR_2020_paper.pdf | http://arxiv.org/abs/1906.03349 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Wang_Video_Modeling_With_Correlation_Networks_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Wang_Video_Modeling_With_Correlation_Networks_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
Understanding Road Layout From Videos as a Whole | Buyu Liu, Bingbing Zhuang, Samuel Schulter, Pan Ji, Manmohan Chandraker | In this paper, we address the problem of inferring the layout of complex road scenes from video sequences. To this end, we formulate it as a top-view road attributes prediction problem and our goal is to predict these attributes for each frame both accurately and consistently. In contrast to prior work, we exploit the following three novel aspects: leveraging camera motions in videos, including context cues and incorporating long-term video information. Specifically, we introduce a model that aims to enforce prediction consistency in videos. Our model consists of one LSTM and one Feature Transform Module (FTM). The former implicitly incorporates the consistency constraint with its hidden states, and the latter explicitly takes the camera motion into consideration when aggregating information along videos. Moreover, we propose to incorporate context information by introducing road participants, e.g. objects, into our model. When the entire video sequence is available, our model is also able to encode both local and global cues, e.g. information from both past and future frames. Experiments on two data sets show that: (1) Incorporating either global or contextual cues improves the prediction accuracy and leveraging both gives the best performance. (2) Introducing the LSTM and FTM modules improves the prediction consistency in videos. (3) The proposed method outperforms the SOTA by a large margin. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Liu_Understanding_Road_Layout_From_Videos_as_a_Whole_CVPR_2020_paper.pdf | http://arxiv.org/abs/2007.00822 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Liu_Understanding_Road_Layout_From_Videos_as_a_Whole_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Liu_Understanding_Road_Layout_From_Videos_as_a_Whole_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Liu_Understanding_Road_Layout_CVPR_2020_supplemental.zip | null | null |
Universal Physical Camouflage Attacks on Object Detectors | Lifeng Huang, Chengying Gao, Yuyin Zhou, Cihang Xie, Alan L. Yuille, Changqing Zou, Ning Liu | In this paper, we study physical adversarial attacks on object detectors in the wild. Previous works mostly craft instance-dependent perturbations only for rigid or planar objects. To this end, we propose to learn an adversarial pattern to effectively attack all instances belonging to the same object category, referred to as Universal Physical Camouflage Attack (UPC). Concretely, UPC crafts camouflage by jointly fooling the region proposal network, as well as misleading the classifier and the regressor to output errors. In order to make UPC effective for non-rigid or non-planar objects, we introduce a set of transformations for mimicking deformable properties. We additionally impose optimization constraint to make generated patterns look natural to human observers. To fairly evaluate the effectiveness of different physical-world attacks, we present the first standardized virtual database, AttackScenes, which simulates the real 3D world in a controllable and reproducible environment. Extensive experiments suggest the superiority of our proposed UPC compared with existing physical adversarial attackers not only in virtual environments (AttackScenes), but also in real-world physical environments. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Huang_Universal_Physical_Camouflage_Attacks_on_Object_Detectors_CVPR_2020_paper.pdf | http://arxiv.org/abs/1909.04326 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Huang_Universal_Physical_Camouflage_Attacks_on_Object_Detectors_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Huang_Universal_Physical_Camouflage_Attacks_on_Object_Detectors_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Huang_Universal_Physical_Camouflage_CVPR_2020_supplemental.pdf | null | null |
Ego-Topo: Environment Affordances From Egocentric Video | Tushar Nagarajan, Yanghao Li, Christoph Feichtenhofer, Kristen Grauman | First-person video naturally brings the use of a physical environment to the forefront, since it shows the camera wearer interacting fluidly in a space based on his intentions. However, current methods largely separate the observed actions from the persistent space itself. We introduce a model for environment affordances that is learned directly from egocentric video. The main idea is to gain a human-centric model of a physical space (such as a kitchen) that captures (1) the primary spatial zones of interaction and (2) the likely activities they support. Our approach decomposes a space into a topological map derived from first-person activity, organizing an ego-video into a series of visits to the different zones. Further, we show how to link zones across multiple related environments (e.g., from videos of multiple kitchens) to obtain a consolidated representation of environment functionality. On EPIC-Kitchens and EGTEA+, we demonstrate our approach for learning scene affordances and anticipating future actions in long-form video. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Nagarajan_Ego-Topo_Environment_Affordances_From_Egocentric_Video_CVPR_2020_paper.pdf | null | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Nagarajan_Ego-Topo_Environment_Affordances_From_Egocentric_Video_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Nagarajan_Ego-Topo_Environment_Affordances_From_Egocentric_Video_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Nagarajan_Ego-Topo_Environment_Affordances_CVPR_2020_supplemental.pdf | null | null |
Few-Shot Object Detection With Attention-RPN and Multi-Relation Detector | Qi Fan, Wei Zhuo, Chi-Keung Tang, Yu-Wing Tai | Conventional methods for object detection typically require a substantial amount of training data, and preparing such high-quality training data is very labor-intensive. In this paper, we propose a novel few-shot object detection network that aims at detecting objects of unseen categories with only a few annotated examples. Central to our method are our Attention-RPN, Multi-Relation Detector and Contrastive Training strategy, which exploit the similarity between the few-shot support set and query set to detect novel objects while suppressing false detection in the background. To train our network, we contribute a new dataset that contains 1000 categories of various objects with high-quality annotations. To the best of our knowledge, this is one of the first datasets specifically designed for few-shot object detection. Once our few-shot network is trained, it can detect objects of unseen categories without further training or fine-tuning. Our method is general and has a wide range of potential applications. We achieve new state-of-the-art performance on different datasets in the few-shot setting. The dataset link is https://github.com/fanq15/Few-Shot-Object-Detection-Dataset. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Fan_Few-Shot_Object_Detection_With_Attention-RPN_and_Multi-Relation_Detector_CVPR_2020_paper.pdf | http://arxiv.org/abs/1908.01998 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Fan_Few-Shot_Object_Detection_With_Attention-RPN_and_Multi-Relation_Detector_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Fan_Few-Shot_Object_Detection_With_Attention-RPN_and_Multi-Relation_Detector_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Fan_Few-Shot_Object_Detection_CVPR_2020_supplemental.pdf | null | null |
Salience-Guided Cascaded Suppression Network for Person Re-Identification | Xuesong Chen, Canmiao Fu, Yong Zhao, Feng Zheng, Jingkuan Song, Rongrong Ji, Yi Yang | Employing attention mechanisms to model both global and local features as a final pedestrian representation has become a trend for person re-identification (Re-ID) algorithms. A potential limitation of these methods is that they focus on the most salient features, but the re-identification of a person may rely on diverse clues masked by the most salient features in different situations, e.g., body, clothes or even shoes. To handle this limitation, we propose a novel Salience-guided Cascaded Suppression Network (SCSN) which enables the model to mine diverse salient features and integrate these features into the final representation in a cascaded manner. Our work makes the following contributions: (i) We observe that the previously learned salient features may hinder the network from learning other important information. To tackle this limitation, we introduce a cascaded suppression strategy, which enables the network to mine diverse potentially useful features that are masked by the other salient features stage by stage, and each stage integrates a different feature embedding into the final discriminative pedestrian representation. (ii) We propose a Salient Feature Extraction (SFE) unit, which can suppress the salient features learned in the previous cascaded stage and then adaptively extract other potential salient features to obtain different clues about pedestrians. (iii) We develop an efficient feature aggregation strategy that fully increases the network's capacity for all potential salient features. Finally, experimental results demonstrate that our proposed method outperforms the state-of-the-art methods on four large-scale datasets. In particular, our approach exceeds the current best method by over 7% on the CUHK03 dataset. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Chen_Salience-Guided_Cascaded_Suppression_Network_for_Person_Re-Identification_CVPR_2020_paper.pdf | null | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Chen_Salience-Guided_Cascaded_Suppression_Network_for_Person_Re-Identification_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Chen_Salience-Guided_Cascaded_Suppression_Network_for_Person_Re-Identification_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
Height and Uprightness Invariance for 3D Prediction From a Single View | Manel Baradad, Antonio Torralba | Current state-of-the-art methods that predict 3D from single images ignore the fact that the height of objects and their upright orientation is invariant to the camera pose and intrinsic parameters. To account for this, we propose a system that directly regresses 3D world coordinates for each pixel. First, our system predicts the camera position with respect to the ground plane and its intrinsic parameters. It then predicts the 3D position for each pixel along the rays spanned by the camera. The predicted 3D coordinates and normals are invariant to a change in the camera position or its model, and we can directly impose a regression loss on these world coordinates. Our approach yields competitive results for depth and camera pose estimation (while not being explicitly trained to predict any of these) and improves across-dataset generalization performance over existing state-of-the-art methods. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Baradad_Height_and_Uprightness_Invariance_for_3D_Prediction_From_a_Single_CVPR_2020_paper.pdf | null | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Baradad_Height_and_Uprightness_Invariance_for_3D_Prediction_From_a_Single_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Baradad_Height_and_Uprightness_Invariance_for_3D_Prediction_From_a_Single_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
Projection & Probability-Driven Black-Box Attack | Jie Li, Rongrong Ji, Hong Liu, Jianzhuang Liu, Bineng Zhong, Cheng Deng, Qi Tian | Generating adversarial examples in a black-box setting remains a significant challenge with vast practical application prospects. In particular, existing black-box attacks suffer from the need for excessive queries, as it is non-trivial to find an appropriate direction to optimize in the high-dimensional space. In this paper, we propose Projection & Probability-driven Black-box Attack (PPBA) to tackle this problem by reducing the solution space and providing better optimization. For reducing the solution space, we first model the adversarial perturbation optimization problem as a process of recovering frequency-sparse perturbations with compressed sensing, under the setting that random noise in the low-frequency space is more likely to be adversarial. We then propose a simple method to construct a low-frequency constrained sensing matrix, which works as a plug-and-play projection matrix to reduce the dimensionality. Such a sensing matrix is shown to be flexible enough to be integrated into existing methods like NES and Bandits_TD. For better optimization, we perform a random walk with a probability-driven strategy, which utilizes all queries over the whole process to make full use of the sensing matrix under a smaller query budget. Extensive experiments show that our method requires at most 24% fewer queries with a higher attack success rate compared with state-of-the-art approaches. Finally, the attack method is evaluated on a real-world online service, i.e., the Google Cloud Vision API, which further demonstrates its practical potential. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Li_Projection__Probability-Driven_Black-Box_Attack_CVPR_2020_paper.pdf | http://arxiv.org/abs/2005.03837 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Li_Projection__Probability-Driven_Black-Box_Attack_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Li_Projection__Probability-Driven_Black-Box_Attack_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
Deep Stereo Using Adaptive Thin Volume Representation With Uncertainty Awareness | Shuo Cheng, Zexiang Xu, Shilin Zhu, Zhuwen Li, Li Erran Li, Ravi Ramamoorthi, Hao Su | We present Uncertainty-aware Cascaded Stereo Network (UCS-Net) for 3D reconstruction from multiple RGB images. Multi-view stereo (MVS) aims to reconstruct fine-grained scene geometry from multi-view images. Previous learning-based MVS methods estimate per-view depth using plane sweep volumes (PSVs) with a fixed depth hypothesis at each plane; this requires densely sampled planes for high accuracy, which is impractical for high-resolution depth because of limited memory. In contrast, we propose adaptive thin volumes (ATVs); in an ATV, the depth hypothesis of each plane is spatially varying, which adapts to the uncertainties of previous per-pixel depth predictions. Our UCS-Net has three stages: the first stage processes a small PSV to predict low-resolution depth; two ATVs are then used in the following stages to refine the depth with higher resolution and higher accuracy. Our ATV consists of only a small number of planes with low memory and computation costs; yet, it efficiently partitions local depth ranges within learned small uncertainty intervals. We propose to use variance-based uncertainty estimates to adaptively construct ATVs; this differentiable process leads to reasonable and fine-grained spatial partitioning. Our multi-stage framework progressively sub-divides the vast scene space with increasing depth resolution and precision, which enables reconstruction with high completeness and accuracy in a coarse-to-fine fashion. We demonstrate that our method achieves superior performance compared with other learning-based MVS methods on various challenging datasets. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Cheng_Deep_Stereo_Using_Adaptive_Thin_Volume_Representation_With_Uncertainty_Awareness_CVPR_2020_paper.pdf | http://arxiv.org/abs/1911.12012 | https://www.youtube.com/watch?v=gDpOJ58RWb8 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Cheng_Deep_Stereo_Using_Adaptive_Thin_Volume_Representation_With_Uncertainty_Awareness_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Cheng_Deep_Stereo_Using_Adaptive_Thin_Volume_Representation_With_Uncertainty_Awareness_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Cheng_Deep_Stereo_Using_CVPR_2020_supplemental.pdf | null | null |
GreedyNAS: Towards Fast One-Shot NAS With Greedy Supernet | Shan You, Tao Huang, Mingmin Yang, Fei Wang, Chen Qian, Changshui Zhang | Training a supernet matters for one-shot neural architecture search (NAS) methods since it serves as a basic performance estimator for different architectures (paths). Current methods mainly hold the assumption that a supernet should give a reasonable ranking over all paths. They thus treat all paths equally and devote much effort to training paths. However, it is difficult for a single supernet to evaluate accurately on such a huge-scale search space (e.g., 7^21). In this paper, instead of covering all paths, we ease the burden of the supernet by encouraging it to focus more on evaluation of those potentially-good ones, which are identified using a surrogate portion of validation data. Concretely, during training, we propose a multi-path sampling strategy with rejection, and greedily filter the weak paths. The training efficiency is thus boosted since the training space has been greedily shrunk from all paths to those potentially-good ones. Moreover, we further adopt an exploration and exploitation policy by introducing an empirical candidate path pool. Our proposed method GreedyNAS is easy to follow, and experimental results on the ImageNet dataset indicate that it can achieve better Top-1 accuracy under the same search space and FLOPs or latency level, but with only 60% of the supernet training cost. By searching on a larger space, our GreedyNAS can also obtain new state-of-the-art architectures. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/You_GreedyNAS_Towards_Fast_One-Shot_NAS_With_Greedy_Supernet_CVPR_2020_paper.pdf | http://arxiv.org/abs/2003.11236 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/You_GreedyNAS_Towards_Fast_One-Shot_NAS_With_Greedy_Supernet_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/You_GreedyNAS_Towards_Fast_One-Shot_NAS_With_Greedy_Supernet_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/You_GreedyNAS_Towards_Fast_CVPR_2020_supplemental.pdf | null | null |
Efficient Dynamic Scene Deblurring Using Spatially Variant Deconvolution Network With Optical Flow Guided Training | Yuan Yuan, Wei Su, Dandan Ma | In order to remove the non-uniform blur of images captured from dynamic scenes, many deep learning based methods design deep networks for large receptive fields and strong fitting capabilities, or use a multi-scale strategy to deblur the image gradually at different scales. Restricted by their fixed structures and parameters, these methods are always huge in model size in order to handle complex blurs. In this paper, we start from the deblurring deconvolution operation and then design an effective and real-time deblurring network. The main contributions are threefold: 1) we construct a spatially variant deconvolution network using modulated deformable convolutions, which can adjust receptive fields adaptively according to the blur features; 2) our analysis shows that the sampling points of deformable convolution can be used to approximate the blur kernel, which can be simplified to bi-directional optical flows, so the position learning of the sampling points can be supervised by bi-directional optical flows; 3) we build a lightweight backbone for the image restoration problem, which balances computational cost and effectiveness well. Experimental results show that the proposed method achieves state-of-the-art deblurring performance, but with fewer parameters and shorter running time. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Yuan_Efficient_Dynamic_Scene_Deblurring_Using_Spatially_Variant_Deconvolution_Network_With_CVPR_2020_paper.pdf | null | https://www.youtube.com/watch?v=l2ydmicepd8 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Yuan_Efficient_Dynamic_Scene_Deblurring_Using_Spatially_Variant_Deconvolution_Network_With_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Yuan_Efficient_Dynamic_Scene_Deblurring_Using_Spatially_Variant_Deconvolution_Network_With_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
Learning 3D Semantic Scene Graphs From 3D Indoor Reconstructions | Johanna Wald, Helisa Dhamo, Nassir Navab, Federico Tombari | Scene understanding has been of high interest in computer vision. It encompasses not only identifying objects in a scene, but also their relationships within the given context. With this goal, a recent line of works tackles 3D semantic segmentation and scene layout prediction. In our work we focus on scene graphs, a data structure that organizes the entities of a scene in a graph, where objects are nodes and their relationships modeled as edges. We leverage inference on scene graphs as a way to carry out 3D scene understanding, mapping objects and their relationships. In particular, we propose a learned method that regresses a scene graph from the point cloud of a scene. Our novel architecture is based on PointNet and Graph Convolutional Networks (GCN). In addition, we introduce 3DSSG, a semiautomatically generated dataset, that contains semantically rich scene graphs of 3D scenes. We show the application of our method in a domain-agnostic retrieval task, where graphs serve as an intermediate representation for 3D-3D and 2D-3D matching. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Wald_Learning_3D_Semantic_Scene_Graphs_From_3D_Indoor_Reconstructions_CVPR_2020_paper.pdf | http://arxiv.org/abs/2004.03967 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Wald_Learning_3D_Semantic_Scene_Graphs_From_3D_Indoor_Reconstructions_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Wald_Learning_3D_Semantic_Scene_Graphs_From_3D_Indoor_Reconstructions_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Wald_Learning_3D_Semantic_CVPR_2020_supplemental.pdf | null | null |
Reliable Weighted Optimal Transport for Unsupervised Domain Adaptation | Renjun Xu, Pelen Liu, Liyan Wang, Chao Chen, Jindong Wang | Recently, extensive research has been conducted to address the unsupervised domain adaptation (UDA) problem, which aims to learn transferable models for the unlabeled target domain. Among existing approaches, optimal transport is a promising metric to align the representations of the source and target domains. However, most existing works based on optimal transport ignore the intra-domain structure, only achieving coarse pair-wise matching. The target samples distributed near the edge of the clusters, or far from their corresponding class centers, are easily misclassified by the decision boundary learned from the source domain. In this paper, we present Reliable Weighted Optimal Transport (RWOT) for unsupervised domain adaptation, including a novel Shrinking Subspace Reliability (SSR) and a weighted optimal transport strategy. Specifically, SSR exploits spatial prototypical information and intra-domain structure to dynamically measure the sample-level domain discrepancy across domains. Besides, the weighted optimal transport strategy based on SSR is exploited to achieve a precise pair-wise optimal transport procedure, which reduces negative transfer brought by the samples near decision boundaries in the target domain. RWOT is also equipped with a discriminative centroid clustering exploitation strategy to learn transfer features. A thorough evaluation shows that RWOT outperforms existing state-of-the-art methods on standard domain adaptation benchmarks. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Xu_Reliable_Weighted_Optimal_Transport_for_Unsupervised_Domain_Adaptation_CVPR_2020_paper.pdf | null | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Xu_Reliable_Weighted_Optimal_Transport_for_Unsupervised_Domain_Adaptation_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Xu_Reliable_Weighted_Optimal_Transport_for_Unsupervised_Domain_Adaptation_CVPR_2020_paper.html | CVPR 2020 | https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Xu_Reliable_Weighted_Optimal_CVPR_2020_supplemental.pdf | null | null |
Deblurring by Realistic Blurring | Kaihao Zhang, Wenhan Luo, Yiran Zhong, Lin Ma, Bjorn Stenger, Wei Liu, Hongdong Li | Existing deep learning methods for image deblurring typically train models using pairs of sharp images and their blurred counterparts. However, synthetically blurring images does not necessarily model the blurring process in real-world scenarios with sufficient accuracy. To address this problem, we propose a new method which combines two GAN models, i.e., a learning-to-Blur GAN (BGAN) and learning-to-DeBlur GAN (DBGAN), in order to learn a better model for image deblurring by primarily learning how to blur images. The first model, BGAN, learns how to blur sharp images with unpaired sharp and blurry image sets, and then guides the second model, DBGAN, to learn how to correctly deblur such images. In order to reduce the discrepancy between real blur and synthesized blur, a relativistic blur loss is leveraged. As an additional contribution, this paper also introduces a Real-World Blurred Image (RWBI) dataset including diverse blurry images. Our experiments show that the proposed method achieves consistently superior quantitative performance as well as higher perceptual quality on both the newly proposed dataset and the public GOPRO dataset. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Zhang_Deblurring_by_Realistic_Blurring_CVPR_2020_paper.pdf | http://arxiv.org/abs/2004.01860 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Zhang_Deblurring_by_Realistic_Blurring_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Zhang_Deblurring_by_Realistic_Blurring_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
Geometric Structure Based and Regularized Depth Estimation From 360 Indoor Imagery | Lei Jin, Yanyu Xu, Jia Zheng, Junfei Zhang, Rui Tang, Shugong Xu, Jingyi Yu, Shenghua Gao | Motivated by the correlation between the depth and the geometric structure of a 360 indoor image, we propose a novel learning-based depth estimation framework that leverages the geometric structure of a scene to conduct depth estimation. Specifically, we represent the geometric structure of an indoor scene as a collection of corners, boundaries and planes. On the one hand, once a depth map is estimated, this geometric structure can be inferred from the estimated depth map; thus, the geometric structure functions as a regularizer for depth estimation. On the other hand, this estimation also benefits from the geometric structure of a scene estimated from an image, where the structure functions as a prior. However, furniture in indoor scenes makes it challenging to infer geometric structure from depth or image data. An attention map is inferred to facilitate both depth estimation from features of the geometric structure and geometric inferences from the estimated depth map. To validate the effectiveness of each component in our framework under controlled conditions, we render a synthetic dataset, the Shanghaitech-Kujiale Indoor 360 dataset with 3550 360 indoor images. Extensive experiments on popular datasets validate the effectiveness of our solution. We also demonstrate that our method can be applied to counterfactual depth. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Jin_Geometric_Structure_Based_and_Regularized_Depth_Estimation_From_360_Indoor_CVPR_2020_paper.pdf | null | https://www.youtube.com/watch?v=4K8FyI7D2-A | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Jin_Geometric_Structure_Based_and_Regularized_Depth_Estimation_From_360_Indoor_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Jin_Geometric_Structure_Based_and_Regularized_Depth_Estimation_From_360_Indoor_CVPR_2020_paper.html | CVPR 2020 | null | https://cove.thecvf.com/datasets/334 | null |
Learning in the Frequency Domain | Kai Xu, Minghai Qin, Fei Sun, Yuhao Wang, Yen-Kuang Chen, Fengbo Ren | Deep neural networks have achieved remarkable success in computer vision tasks. Existing neural networks mainly operate in the spatial domain with fixed input sizes. For practical applications, images are usually large and have to be downsampled to the predetermined input size of neural networks. Even though the downsampling operations reduce computation and the required communication bandwidth, they remove both redundant and salient information indiscriminately, which results in accuracy degradation. Inspired by digital signal processing theories, we analyze the spectral bias from the frequency perspective and propose a learning-based frequency selection method to identify the trivial frequency components which can be removed without accuracy loss. The proposed method of learning in the frequency domain leverages identical structures of the well-known neural networks, such as ResNet-50, MobileNetV2, and Mask R-CNN, while accepting the frequency-domain information as the input. Experimental results show that learning in the frequency domain with static channel selection can achieve higher accuracy than the conventional spatial downsampling approach and meanwhile further reduce the input data size. Specifically, for ImageNet classification with the same input size, the proposed method achieves 1.60% and 0.63% top-1 accuracy improvements on ResNet-50 and MobileNetV2, respectively. Even with half the input size, the proposed method still improves the top-1 accuracy on ResNet-50 by 1.42%. In addition, we observe a 0.8% average precision improvement on Mask R-CNN for instance segmentation on the COCO dataset. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Xu_Learning_in_the_Frequency_Domain_CVPR_2020_paper.pdf | http://arxiv.org/abs/2002.12416 | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Xu_Learning_in_the_Frequency_Domain_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Xu_Learning_in_the_Frequency_Domain_CVPR_2020_paper.html | CVPR 2020 | null | null | null |
BiFuse: Monocular 360 Depth Estimation via Bi-Projection Fusion | Fu-En Wang, Yu-Hsuan Yeh, Min Sun, Wei-Chen Chiu, Yi-Hsuan Tsai | Depth estimation from a monocular 360 image is an emerging problem that gains popularity due to the availability of consumer-level 360 cameras and the complete surrounding sensing capability. While the standard of 360 imaging is under rapid development, we propose to predict the depth map of a monocular 360 image by mimicking both peripheral and foveal vision of the human eye. To this end, we adopt a two-branch neural network leveraging two common projections: equirectangular and cubemap projections. In particular, equirectangular projection incorporates a complete field-of-view but introduces distortion, whereas cubemap projection avoids distortion but introduces discontinuity at the boundary of the cube. Thus we propose a bi-projection fusion scheme along with learnable masks to balance the feature map from the two projections. Moreover, for the cubemap projection, we propose a spherical padding procedure which mitigates discontinuity at the boundary of each face. We apply our method to four panorama datasets and show favorable results against the existing state-of-the-art methods. | https://openaccess.thecvf.com../../content_CVPR_2020/papers/Wang_BiFuse_Monocular_360_Depth_Estimation_via_Bi-Projection_Fusion_CVPR_2020_paper.pdf | null | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content_CVPR_2020/html/Wang_BiFuse_Monocular_360_Depth_Estimation_via_Bi-Projection_Fusion_CVPR_2020_paper.html | https://openaccess.thecvf.com/content_CVPR_2020/html/Wang_BiFuse_Monocular_360_Depth_Estimation_via_Bi-Projection_Fusion_CVPR_2020_paper.html | CVPR 2020 | null | null | null |