Columns (all of type string): title, authors, abstract, pdf, arXiv, video, bibtex, url, detail_url, tags, supp, dataset
Pattern-Structure Diffusion for Multi-Task Learning
Ling Zhou, Zhen Cui, Chunyan Xu, Zhenyu Zhang, Chaoqun Wang, Tong Zhang, Jian Yang
Inspired by the observation that pattern structures recur frequently both within a task and across tasks, we propose a pattern-structure diffusion (PSD) framework to mine and propagate task-specific and cross-task pattern structures in the task-level space for joint depth estimation, segmentation and surface normal prediction. To represent local pattern structures, we model them as small-scale graphlets and propagate them in two different ways, i.e., intra-task and inter-task PSD. For the former, to overcome the locality of pattern structures, we use high-order recursive aggregation on neighbors to multiplicatively increase the spread scope, so that long-distance patterns are propagated in the intra-task space. In inter-task PSD, we mutually transfer the counterpart structures corresponding to the same spatial position into the task itself, based on the matching degree of the paired pattern structures therein. Finally, the intra-task and inter-task pattern structures are jointly diffused among the task-level patterns and encapsulated into an end-to-end PSD network to boost the performance of multi-task learning. Extensive experiments on two widely used benchmarks demonstrate that our proposed PSD is effective and achieves state-of-the-art or competitive results.
https://openaccess.thecvf.com/content_CVPR_2020/papers/Zhou_Pattern-Structure_Diffusion_for_Multi-Task_Learning_CVPR_2020_paper.pdf
null
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Zhou_Pattern-Structure_Diffusion_for_Multi-Task_Learning_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Zhou_Pattern-Structure_Diffusion_for_Multi-Task_Learning_CVPR_2020_paper.html
CVPR 2020
null
null
null
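The intra-task propagation described in the PSD abstract above boils down to recursively aggregating neighbor features so that the effective receptive field grows with the recursion order. The following is a minimal sketch of that idea only, assuming a row-normalized affinity matrix between local pattern structures; the function and variable names are hypothetical and not taken from the paper.

```python
import numpy as np

def high_order_propagate(features, adjacency, order=3):
    """Toy intra-task propagation: recursively aggregate neighbor features
    so that the spread scope grows with `order` (hypothetical sketch).

    features : (N, C) node/pixel features for one task
    adjacency: (N, N) row-normalized affinity between local pattern structures
    """
    out = features.copy()
    hop = features
    for _ in range(order):
        hop = adjacency @ hop          # one more hop of diffusion
        out = out + hop                # accumulate longer-range patterns
    return out / (order + 1)

# usage: 5 nodes with 4-dim features and a random row-stochastic affinity
rng = np.random.default_rng(0)
A = rng.random((5, 5)); A /= A.sum(axis=1, keepdims=True)
X = rng.random((5, 4))
print(high_order_propagate(X, A).shape)  # (5, 4)
```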
On the Acceleration of Deep Learning Model Parallelism With Staleness
An Xu, Zhouyuan Huo, Heng Huang
Training deep convolutional neural networks for computer vision problems is slow and inefficient, especially when the model is large and distributed across multiple devices. The inefficiency is caused by the forward-locking, backward-locking, and update-locking problems of the backpropagation algorithm. Existing acceleration approaches either handle only one of these locking problems or lead to severe accuracy loss or memory inefficiency. Moreover, none of them consider the straggler problem among devices. In this paper, we propose Layer-wise Staleness and a novel efficient training algorithm, Diversely Stale Parameters (DSP), to address these challenges. We also analyze the convergence of DSP with two popular gradient-based methods and prove that both are guaranteed to converge to critical points for non-convex problems. Finally, extensive experimental results on training deep learning models demonstrate that the proposed DSP algorithm achieves significant training speedup with stronger robustness than the compared methods.
https://openaccess.thecvf.com/content_CVPR_2020/papers/Xu_On_the_Acceleration_of_Deep_Learning_Model_Parallelism_With_Staleness_CVPR_2020_paper.pdf
null
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Xu_On_the_Acceleration_of_Deep_Learning_Model_Parallelism_With_Staleness_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Xu_On_the_Acceleration_of_Deep_Learning_Model_Parallelism_With_Staleness_CVPR_2020_paper.html
CVPR 2020
https://openaccess.thecvf.com/content_CVPR_2020/supplemental/Xu_On_the_Acceleration_CVPR_2020_supplemental.pdf
null
null
Self-Supervised Scene De-Occlusion
Xiaohang Zhan, Xingang Pan, Bo Dai, Ziwei Liu, Dahua Lin, Chen Change Loy
Natural scene understanding is a challenging task, particularly when encountering images of multiple objects that are partially occluded. This obstacle arises from varying object ordering and positioning. Existing scene understanding paradigms are able to parse only the visible parts, resulting in incomplete and unstructured scene interpretation. In this paper, we investigate the problem of scene de-occlusion, which aims to recover the underlying occlusion ordering and complete the invisible parts of occluded objects. We make the first attempt to address the problem through a novel and unified framework that recovers hidden scene structures without ordering or amodal annotations as supervision. This is achieved via Partial Completion Network (PCNet)-mask (M) and -content (C), which learn to recover fractions of object masks and contents, respectively, in a self-supervised manner. Based on PCNet-M and PCNet-C, we devise a novel inference scheme to accomplish scene de-occlusion via progressive ordering recovery, amodal completion and content completion. Extensive experiments on real-world scenes demonstrate the superior performance of our approach over other alternatives. Remarkably, our approach, trained in a self-supervised manner, achieves results comparable to fully supervised methods. The proposed scene de-occlusion framework benefits many applications, including high-quality and controllable image manipulation and scene recomposition (see Fig. 1 of the paper), as well as the conversion of existing modal mask annotations to amodal mask annotations.
https://openaccess.thecvf.com/content_CVPR_2020/papers/Zhan_Self-Supervised_Scene_De-Occlusion_CVPR_2020_paper.pdf
http://arxiv.org/abs/2004.02788
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Zhan_Self-Supervised_Scene_De-Occlusion_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Zhan_Self-Supervised_Scene_De-Occlusion_CVPR_2020_paper.html
CVPR 2020
https://openaccess.thecvf.com/content_CVPR_2020/supplemental/Zhan_Self-Supervised_Scene_De-Occlusion_CVPR_2020_supplemental.pdf
null
null
DeepFLASH: An Efficient Network for Learning-Based Medical Image Registration
Jian Wang, Miaomiao Zhang
This paper presents DeepFLASH, a novel network with efficient training and inference for learning-based medical image registration. In contrast to existing approaches that learn spatial transformations from training data in the high-dimensional imaging space, we develop a new registration network entirely in a low-dimensional bandlimited space. This dramatically reduces the computational cost and memory footprint of otherwise expensive training and inference. To achieve this goal, we first introduce complex-valued operations and representations of neural architectures that provide key components for learning-based registration models. We then construct an explicit loss function of transformation fields fully characterized in a bandlimited space with far fewer parameters. Experimental results show that our method is significantly faster than state-of-the-art deep-learning-based image registration methods, while producing equally accurate alignment. We demonstrate our algorithm in two different applications of image registration: 2D synthetic data and 3D real brain magnetic resonance (MR) images.
https://openaccess.thecvf.com/content_CVPR_2020/papers/Wang_DeepFLASH_An_Efficient_Network_for_Learning-Based_Medical_Image_Registration_CVPR_2020_paper.pdf
http://arxiv.org/abs/2004.02097
https://www.youtube.com/watch?v=QAIVcdnB0v4
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Wang_DeepFLASH_An_Efficient_Network_for_Learning-Based_Medical_Image_Registration_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Wang_DeepFLASH_An_Efficient_Network_for_Learning-Based_Medical_Image_Registration_CVPR_2020_paper.html
CVPR 2020
null
null
null
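The DeepFLASH abstract above mentions complex-valued operations for neural architectures. One common way to realize a complex convolution with real-valued layers is to split real and imaginary parts and combine two real convolutions; the sketch below illustrates only that generic construction, not the authors' exact layer, and all names are hypothetical.

```python
import torch
import torch.nn as nn

class ComplexConv2d(nn.Module):
    """Complex convolution from two real convolutions:
    (x_r + i x_i) * (w_r + i w_i) = (x_r*w_r - x_i*w_i) + i (x_r*w_i + x_i*w_r)."""
    def __init__(self, in_ch, out_ch, k=3, padding=1):
        super().__init__()
        self.conv_r = nn.Conv2d(in_ch, out_ch, k, padding=padding)
        self.conv_i = nn.Conv2d(in_ch, out_ch, k, padding=padding)

    def forward(self, x_r, x_i):
        real = self.conv_r(x_r) - self.conv_i(x_i)
        imag = self.conv_r(x_i) + self.conv_i(x_r)
        return real, imag

# usage on a toy band-limited field with 8 frequency channels
xr, xi = torch.randn(1, 8, 16, 16), torch.randn(1, 8, 16, 16)
yr, yi = ComplexConv2d(8, 8)(xr, xi)
print(yr.shape, yi.shape)
```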
Structure Boundary Preserving Segmentation for Medical Image With Ambiguous Boundary
Hong Joo Lee, Jung Uk Kim, Sangmin Lee, Hak Gu Kim, Yong Man Ro
In this paper, we propose a novel image segmentation method to tackle two critical problems in medical images: (i) the ambiguity of structure boundaries in the medical imaging domain and (ii) the uncertainty of the segmented region without specialized domain knowledge. To solve these two problems in automatic medical segmentation, we propose a novel structure boundary preserving segmentation framework. To this end, a boundary key point selection algorithm is proposed that estimates the key points on the structural boundary of the target object. Then, a boundary preserving block (BPB) with the boundary key point map is applied to predict the structure boundary of the target object. Further, to embed experts' knowledge into fully automatic segmentation, we propose a novel shape boundary-aware evaluator (SBE) that uses the ground-truth structure information indicated by experts. The proposed SBE gives feedback to the segmentation network based on the structure boundary key points. The proposed method is general and flexible enough to be built on top of any deep-learning-based segmentation network. We demonstrate that the proposed method surpasses state-of-the-art segmentation networks and improves the accuracy of three different segmentation network models on different types of medical image datasets.
https://openaccess.thecvf.com/content_CVPR_2020/papers/Lee_Structure_Boundary_Preserving_Segmentation_for_Medical_Image_With_Ambiguous_Boundary_CVPR_2020_paper.pdf
null
https://www.youtube.com/watch?v=qF3TAxF3m1I
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Lee_Structure_Boundary_Preserving_Segmentation_for_Medical_Image_With_Ambiguous_Boundary_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Lee_Structure_Boundary_Preserving_Segmentation_for_Medical_Image_With_Ambiguous_Boundary_CVPR_2020_paper.html
CVPR 2020
https://openaccess.thecvf.com/content_CVPR_2020/supplemental/Lee_Structure_Boundary_Preserving_CVPR_2020_supplemental.pdf
null
null
Residual Feature Aggregation Network for Image Super-Resolution
Jie Liu, Wenjie Zhang, Yuting Tang, Jie Tang, Gangshan Wu
Recently, very deep convolutional neural networks (CNNs) have shown great power in single image super-resolution (SISR) and achieved significant improvements over traditional methods. Among these CNN-based methods, the residual connections play a critical role in boosting network performance. As the network depth grows, the residual features gradually focus on different aspects of the input image, which is very useful for reconstructing spatial details. However, existing methods neglect to fully utilize the hierarchical features on the residual branches. To address this issue, we propose a novel residual feature aggregation (RFA) framework for more efficient feature extraction. The RFA framework groups several residual modules together and directly forwards the features on each local residual branch by adding skip connections. Therefore, the RFA framework is capable of aggregating these informative residual features to produce more representative features. To maximize the power of the RFA framework, we further propose an enhanced spatial attention (ESA) block to make the residual features focus more on critical spatial contents. The ESA block is designed to be lightweight and efficient. Our final RFANet is constructed by applying the proposed RFA framework with the ESA blocks. Comprehensive experiments demonstrate the necessity of our RFA framework and the superiority of our RFANet over state-of-the-art SISR methods.
https://openaccess.thecvf.com/content_CVPR_2020/papers/Liu_Residual_Feature_Aggregation_Network_for_Image_Super-Resolution_CVPR_2020_paper.pdf
null
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Liu_Residual_Feature_Aggregation_Network_for_Image_Super-Resolution_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Liu_Residual_Feature_Aggregation_Network_for_Image_Super-Resolution_CVPR_2020_paper.html
CVPR 2020
https://openaccess.thecvf.com/content_CVPR_2020/supplemental/Liu_Residual_Feature_Aggregation_CVPR_2020_supplemental.pdf
null
null
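The residual feature aggregation idea in the abstract above (group residual blocks, forward each local residual branch by a skip connection, and fuse) can be illustrated with a small PyTorch sketch. This is only a minimal interpretation under the stated description; the ESA attention block is omitted, and the class names, block count, and 1x1 fusion are assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1))

    def forward(self, x):
        return x + self.body(x)

class RFABlock(nn.Module):
    """Aggregate the local residual-branch features of 4 residual blocks
    by concatenation and a 1x1 fusion conv (attention block omitted)."""
    def __init__(self, ch):
        super().__init__()
        self.blocks = nn.ModuleList([ResBlock(ch) for _ in range(4)])
        self.fuse = nn.Conv2d(4 * ch, ch, 1)

    def forward(self, x):
        feats, h = [], x
        for blk in self.blocks:
            res = blk.body(h)          # local residual branch
            feats.append(res)          # forwarded by a skip connection
            h = h + res                # output of the residual block
        return x + self.fuse(torch.cat(feats, dim=1))

y = RFABlock(16)(torch.randn(1, 16, 32, 32))
print(y.shape)  # torch.Size([1, 16, 32, 32])
```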
Context-Aware and Scale-Insensitive Temporal Repetition Counting
Huaidong Zhang, Xuemiao Xu, Guoqiang Han, Shengfeng He
Temporal repetition counting aims to estimate the number of cycles of a given repetitive action. Existing deep learning methods assume repetitive actions are performed at a fixed time-scale, which is invalid for the complex repetitive actions in real life. In this paper, we tailor a context-aware and scale-insensitive framework to tackle the challenges in repetition counting caused by unknown and diverse cycle lengths. Our approach combines two key insights: (1) Cycle lengths of different actions are unpredictable and require large-scale search; however, once a coarse cycle length is determined, the variation between repetitions can be overcome by regression. (2) Determining the cycle length cannot rely only on a short fragment of video; it requires contextual understanding. The first point is implemented by a coarse-to-fine cycle refinement method. It avoids the heavy computation of exhaustively searching all cycle lengths in the video and instead propagates the coarse prediction for further refinement in a hierarchical manner. Second, we propose a bidirectional cycle length estimation method for context-aware prediction. It is a regression network that takes two consecutive coarse cycles as input and predicts the locations of the previous and next repetitive cycles. To support training and evaluation in temporal repetition counting, we construct a new, largest-to-date benchmark, which contains 526 videos with diverse repetitive actions. Extensive experiments show that the proposed network, trained on a single dataset, outperforms state-of-the-art methods on several benchmarks, indicating that the proposed framework is general enough to capture repetition patterns across domains. Code and data are available at https://github.com/Xiaodomgdomg/Deep-Temporal-Repetition-Counting.
https://openaccess.thecvf.com/content_CVPR_2020/papers/Zhang_Context-Aware_and_Scale-Insensitive_Temporal_Repetition_Counting_CVPR_2020_paper.pdf
http://arxiv.org/abs/2005.08465
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Zhang_Context-Aware_and_Scale-Insensitive_Temporal_Repetition_Counting_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Zhang_Context-Aware_and_Scale-Insensitive_Temporal_Repetition_Counting_CVPR_2020_paper.html
CVPR 2020
null
null
null
DEPARA: Deep Attribution Graph for Deep Knowledge Transferability
Jie Song, Yixin Chen, Jingwen Ye, Xinchao Wang, Chengchao Shen, Feng Mao, Mingli Song
Exploring the intrinsic interconnections between the knowledge encoded in PRe-trained Deep Neural Networks (PR-DNNs) of heterogeneous tasks sheds light on their mutual transferability, and consequently enables knowledge transfer from one task to another so as to reduce the training effort of the latter. In this paper, we propose the DEeP Attribution gRAph (DEPARA) to investigate the transferability of knowledge learned from PR-DNNs. In DEPARA, nodes correspond to the inputs and are represented by their vectorized attribution maps with regards to the outputs of the PR-DNN. Edges denote the relatedness between inputs and are measured by the similarity of their features extracted from the PR-DNN. The knowledge transferability of two PR-DNNs is measured by the similarity of their corresponding DEPARAs. We apply DEPARA to two important yet under-studied problems in transfer learning: pre-trained model selection and layer selection. Extensive experiments are conducted to demonstrate the effectiveness and superiority of the proposed method in solving both these problems. Code, data and models reproducing the results in this paper are available at https://github.com/zju-vipa/DEPARA.
https://openaccess.thecvf.com/content_CVPR_2020/papers/Song_DEPARA_Deep_Attribution_Graph_for_Deep_Knowledge_Transferability_CVPR_2020_paper.pdf
http://arxiv.org/abs/2003.07496
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Song_DEPARA_Deep_Attribution_Graph_for_Deep_Knowledge_Transferability_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Song_DEPARA_Deep_Attribution_Graph_for_Deep_Knowledge_Transferability_CVPR_2020_paper.html
CVPR 2020
https://openaccess.thecvf.com/content_CVPR_2020/supplemental/Song_DEPARA_Deep_Attribution_CVPR_2020_supplemental.pdf
null
null
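The DEPARA abstract above describes a graph whose nodes are vectorized attribution maps and whose edges are feature similarities, with transferability measured by comparing two such graphs. The toy sketch below illustrates one plausible reading of that comparison (mean cosine similarity of node attributes plus correlation of edge affinities); the scoring function and its weighting are assumptions for illustration only, not the paper's actual metric.

```python
import numpy as np

def cosine_matrix(F):
    F = F / (np.linalg.norm(F, axis=1, keepdims=True) + 1e-8)
    return F @ F.T

def depara_like_similarity(attr_a, feat_a, attr_b, feat_b):
    """Toy graph comparison for two pre-trained models on the same probe inputs.
    attr_*: (N, D) vectorized attribution maps (node attributes)
    feat_*: (N, K) features used to build edges
    """
    # node term: mean cosine between corresponding attribution vectors
    an = attr_a / (np.linalg.norm(attr_a, axis=1, keepdims=True) + 1e-8)
    bn = attr_b / (np.linalg.norm(attr_b, axis=1, keepdims=True) + 1e-8)
    node_sim = float((an * bn).sum(axis=1).mean())
    # edge term: correlation between the two affinity matrices
    Ea, Eb = cosine_matrix(feat_a), cosine_matrix(feat_b)
    edge_sim = float(np.corrcoef(Ea.ravel(), Eb.ravel())[0, 1])
    return 0.5 * (node_sim + edge_sim)   # assumed equal weighting

rng = np.random.default_rng(1)
print(depara_like_similarity(rng.random((8, 32)), rng.random((8, 16)),
                             rng.random((8, 32)), rng.random((8, 16))))
```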
Disentangling and Unifying Graph Convolutions for Skeleton-Based Action Recognition
Ziyu Liu, Hongwen Zhang, Zhenghao Chen, Zhiyong Wang, Wanli Ouyang
Spatial-temporal graphs have been widely used by skeleton-based action recognition algorithms to model human action dynamics. To capture robust movement patterns from these graphs, long-range and multi-scale context aggregation and spatial-temporal dependency modeling are critical aspects of a powerful feature extractor. However, existing methods have limitations in achieving (1) unbiased long-range joint relationship modeling under multi-scale operators and (2) unobstructed cross-spacetime information flow for capturing complex spatial-temporal dependencies. In this work, we present (1) a simple method to disentangle multi-scale graph convolutions and (2) a unified spatial-temporal graph convolutional operator named G3D. The proposed multi-scale aggregation scheme disentangles the importance of nodes in different neighborhoods for effective long-range modeling. The proposed G3D module leverages dense cross-spacetime edges as skip connections for direct information propagation across the spatial-temporal graph. By coupling these proposals, we develop a powerful feature extractor named MS-G3D based on which our model outperforms previous state-of-the-art methods on three large-scale datasets: NTU RGB+D 60, NTU RGB+D 120, and Kinetics Skeleton 400.
https://openaccess.thecvf.com/content_CVPR_2020/papers/Liu_Disentangling_and_Unifying_Graph_Convolutions_for_Skeleton-Based_Action_Recognition_CVPR_2020_paper.pdf
http://arxiv.org/abs/2003.14111
https://www.youtube.com/watch?v=fh6ys5qWqxk
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Liu_Disentangling_and_Unifying_Graph_Convolutions_for_Skeleton-Based_Action_Recognition_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Liu_Disentangling_and_Unifying_Graph_Convolutions_for_Skeleton-Based_Action_Recognition_CVPR_2020_paper.html
CVPR 2020
https://openaccess.thecvf.com/content_CVPR_2020/supplemental/Liu_Disentangling_and_Unifying_CVPR_2020_supplemental.pdf
null
null
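One way to read "disentangles the importance of nodes in different neighborhoods" in the MS-G3D abstract above is to build per-scale adjacency masks that connect only node pairs at exactly k hops, so each scale aggregates a disjoint neighborhood. The sketch below shows that construction under this assumption; it is an illustration of the idea rather than the paper's exact operator.

```python
import numpy as np

def k_hop_adjacencies(A, max_k=3):
    """Return masks A_k connecting only node pairs whose shortest-path
    distance is exactly k (so scales do not overlap)."""
    n = A.shape[0]
    A_tilde = (((A > 0).astype(int) + np.eye(n, dtype=int)) > 0).astype(int)
    reach_prev = np.eye(n, dtype=int)                 # within distance 0
    masks = []
    for _ in range(max_k):
        reach_k = ((reach_prev @ A_tilde) > 0).astype(int)  # within distance k
        masks.append(reach_k - reach_prev)                  # exactly distance k
        reach_prev = reach_k
    return masks

# usage on a 5-node chain graph (e.g., a small skeleton fragment)
A = np.zeros((5, 5), dtype=int)
for i in range(4):
    A[i, i + 1] = A[i + 1, i] = 1
for k, M in enumerate(k_hop_adjacencies(A), start=1):
    print(k, M.sum())  # number of exactly-k-hop node pairs
```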
Category-Level Articulated Object Pose Estimation
Xiaolong Li, He Wang, Li Yi, Leonidas J. Guibas, A. Lynn Abbott, Shuran Song
This paper addresses the task of category-level pose estimation for articulated objects from a single depth image. We present a novel category-level approach that correctly accommodates object instances previously unseen during training. We introduce Articulation-aware Normalized Coordinate Space Hierarchy (ANCSH) - a canonical representation for different articulated objects in a given category. As the key to achieve intra-category generalization, the representation constructs a canonical object space as well as a set of canonical part spaces. The canonical object space normalizes the object orientation, scales and articulations (e.g. joint parameters and states) while each canonical part space further normalizes its part pose and scale. We develop a deep network based on PointNet++ that predicts ANCSH from a single depth point cloud, including part segmentation, normalized coordinates, and joint parameters in the canonical object space. By leveraging the canonicalized joints, we demonstrate: 1) improved performance in part pose and scale estimations using the induced kinematic constraints from joints; 2) high accuracy for joint parameter estimation in camera space.
https://openaccess.thecvf.com/content_CVPR_2020/papers/Li_Category-Level_Articulated_Object_Pose_Estimation_CVPR_2020_paper.pdf
http://arxiv.org/abs/1912.11913
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Li_Category-Level_Articulated_Object_Pose_Estimation_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Li_Category-Level_Articulated_Object_Pose_Estimation_CVPR_2020_paper.html
CVPR 2020
https://openaccess.thecvf.com/content_CVPR_2020/supplemental/Li_Category-Level_Articulated_Object_CVPR_2020_supplemental.pdf
null
null
ImVoteNet: Boosting 3D Object Detection in Point Clouds With Image Votes
Charles R. Qi, Xinlei Chen, Or Litany, Leonidas J. Guibas
3D object detection has seen quick progress thanks to advances in deep learning on point clouds. A few recent works have even shown state-of-the-art performance with just point clouds input (e.g. VoteNet). However, point cloud data have inherent limitations. They are sparse, lack color information and often suffer from sensor noise. Images, on the other hand, have high resolution and rich texture. Thus they can complement the 3D geometry provided by point clouds. Yet how to effectively use image information to assist point cloud based detection is still an open question. In this work, we build on top of VoteNet and propose a 3D detection architecture called ImVoteNet specialized for RGB-D scenes. ImVoteNet is based on fusing 2D votes in images and 3D votes in point clouds. Compared to prior work on multi-modal detection, we explicitly extract both geometric and semantic features from the 2D images. We leverage camera parameters to lift these features to 3D. To improve the synergy of 2D-3D feature fusion, we also propose a multi-tower training scheme. We validate our model on the challenging SUN RGB-D dataset, advancing state-of-the-art results by 5.7 mAP. We also provide rich ablation studies to analyze the contribution of each design choice.
https://openaccess.thecvf.com/content_CVPR_2020/papers/Qi_ImVoteNet_Boosting_3D_Object_Detection_in_Point_Clouds_With_Image_CVPR_2020_paper.pdf
http://arxiv.org/abs/2001.10692
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Qi_ImVoteNet_Boosting_3D_Object_Detection_in_Point_Clouds_With_Image_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Qi_ImVoteNet_Boosting_3D_Object_Detection_in_Point_Clouds_With_Image_CVPR_2020_paper.html
CVPR 2020
https://openaccess.thecvf.com/content_CVPR_2020/supplemental/Qi_ImVoteNet_Boosting_3D_CVPR_2020_supplemental.pdf
null
null
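The ImVoteNet abstract above mentions leveraging camera parameters to lift 2D image features into 3D. The standard geometric step behind such lifting is back-projection with the camera intrinsics; the sketch below shows only that generic step, with a made-up intrinsic matrix and depths, and is not the paper's fusion mechanism.

```python
import numpy as np

def backproject(uv, depth, K):
    """Lift pixels (u, v) with z-depth to 3D points in the camera frame:
    X = depth * K^{-1} [u, v, 1]^T."""
    uv1 = np.concatenate([uv, np.ones((uv.shape[0], 1))], axis=1)  # (N, 3)
    rays = (np.linalg.inv(K) @ uv1.T).T                            # z == 1
    return rays * depth[:, None]

K = np.array([[500.0, 0, 320], [0, 500, 240], [0, 0, 1]])   # toy intrinsics
uv = np.array([[320.0, 240.0], [100.0, 50.0]])
depth = np.array([2.0, 3.0])
print(backproject(uv, depth, K))   # (2, 3) points in camera coordinates
```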
Achieving Robustness in the Wild via Adversarial Mixing With Disentangled Representations
Sven Gowal, Chongli Qin, Po-Sen Huang, Taylan Cemgil, Krishnamurthy Dvijotham, Timothy Mann, Pushmeet Kohli
Recent research has made the surprising finding that state-of-the-art deep learning models sometimes fail to generalize to small variations of the input. Adversarial training has been shown to be an effective approach to overcome this problem. However, its application has been limited to enforcing invariance to analytically defined transformations like lp-norm bounded perturbations. Such perturbations do not necessarily cover plausible real-world variations that preserve the semantics of the input (such as a change in lighting conditions). In this paper, we propose a novel approach to express and formalize robustness to these kinds of real-world transformations of the input. The two key ideas underlying our formulation are (1) leveraging disentangled representations of the input to define different factors of variations, and (2) generating new input images by adversarially composing the representations of different images. We use a StyleGAN model to demonstrate the efficacy of this framework. Specifically, we leverage the disentangled latent representations computed by a StyleGAN model to generate perturbations of an image that are similar to real-world variations (like adding make-up, or changing the skin-tone of a person) and train models to be invariant to these perturbations. Extensive experiments show that our method improves generalization and reduces the effect of spurious correlations (reducing the error rate of a "smile" detector by 21% for example).
https://openaccess.thecvf.com/content_CVPR_2020/papers/Gowal_Achieving_Robustness_in_the_Wild_via_Adversarial_Mixing_With_Disentangled_CVPR_2020_paper.pdf
http://arxiv.org/abs/1912.03192
https://www.youtube.com/watch?v=4r5ekdI33OQ
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Gowal_Achieving_Robustness_in_the_Wild_via_Adversarial_Mixing_With_Disentangled_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Gowal_Achieving_Robustness_in_the_Wild_via_Adversarial_Mixing_With_Disentangled_CVPR_2020_paper.html
CVPR 2020
https://openaccess.thecvf.com/content_CVPR_2020/supplemental/Gowal_Achieving_Robustness_in_CVPR_2020_supplemental.pdf
null
null
Deep Non-Line-of-Sight Reconstruction
Javier Grau Chopite, Matthias B. Hullin, Michael Wand, Julian Iseringhausen
Recent years have seen a surge of interest in methods for imaging beyond the direct line of sight. The most prominent techniques rely on time-resolved optical impulse responses, obtained by illuminating a diffuse wall with an ultrashort light pulse and observing multi-bounce indirect reflections with an ultrafast time-resolved imager. Reconstruction of geometry from such data, however, is a complex non-linear inverse problem that comes with substantial computational demands. In this paper, we employ convolutional feed-forward networks for solving the reconstruction problem efficiently while maintaining good reconstruction quality. Specifically, we devise a tailored autoencoder architecture, trained end-to-end, that maps transient images directly to a depth-map representation. Training is done using a recent, very efficient transient renderer for three-bounce indirect light transport that enables the quick generation of large amounts of training data for the network. We examine the performance of our method on a variety of synthetic and experimental datasets and its dependency on the choice of training data and augmentation strategies, as well as architectural features. We demonstrate that our feed-forward network, even if trained solely on synthetic data, is able to obtain results competitive with previous, model-based optimization methods, while being orders of magnitude faster.
https://openaccess.thecvf.com/content_CVPR_2020/papers/Chopite_Deep_Non-Line-of-Sight_Reconstruction_CVPR_2020_paper.pdf
http://arxiv.org/abs/2001.09067
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Chopite_Deep_Non-Line-of-Sight_Reconstruction_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Chopite_Deep_Non-Line-of-Sight_Reconstruction_CVPR_2020_paper.html
CVPR 2020
https://openaccess.thecvf.com/content_CVPR_2020/supplemental/Chopite_Deep_Non-Line-of-Sight_Reconstruction_CVPR_2020_supplemental.pdf
null
null
Unsupervised Adaptation Learning for Hyperspectral Imagery Super-Resolution
Lei Zhang, Jiangtao Nie, Wei Wei, Yanning Zhang, Shengcai Liao, Ling Shao
The key to fusion-based hyperspectral image (HSI) super-resolution (SR) is to infer the posterior of a latent HSI using an appropriate image prior and a likelihood that depends on the degeneration. However, in practice the priors of high-dimensional HSIs can be extremely complicated and the degeneration is often unknown. Consequently, most existing approaches, which assume a shallow hand-crafted image prior and a pre-defined degeneration, fail to generalize well in real applications. To tackle this problem, we present an unsupervised adaptation learning (UAL) framework. Instead of directly modelling the complicated image prior, we propose to first implicitly learn a general image prior using deep networks and then adapt it to a specific HSI. Following this idea, we develop a two-stage SR network that leverages two consecutive modules, a fusion module and an adaptation module, to recover the latent HSI in a coarse-to-fine scheme. The fusion module is pretrained in a supervised manner on synthetic data to capture a spatial-spectral prior that is general across most HSIs. To adapt the learned general prior to the specific HSI under unknown degeneration, we introduce a simple degeneration network to assist learning both the adaptation module and the degeneration in an unsupervised way. In this way, the resulting image-specific prior and the estimated degeneration can benefit the inference of a more accurate posterior, thereby increasing generalization capacity. To verify the efficacy of UAL, we extensively evaluate it on four benchmark datasets and report strong results that surpass existing approaches.
https://openaccess.thecvf.com/content_CVPR_2020/papers/Zhang_Unsupervised_Adaptation_Learning_for_Hyperspectral_Imagery_Super-Resolution_CVPR_2020_paper.pdf
null
https://www.youtube.com/watch?v=VmEkrRmdnVE
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Zhang_Unsupervised_Adaptation_Learning_for_Hyperspectral_Imagery_Super-Resolution_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Zhang_Unsupervised_Adaptation_Learning_for_Hyperspectral_Imagery_Super-Resolution_CVPR_2020_paper.html
CVPR 2020
https://openaccess.thecvf.com/content_CVPR_2020/supplemental/Zhang_Unsupervised_Adaptation_Learning_CVPR_2020_supplemental.pdf
null
null
Joint Demosaicing and Denoising With Self Guidance
Lin Liu, Xu Jia, Jianzhuang Liu, Qi Tian
Usually located at the very early stages of the computational photography pipeline, demosaicing and denoising play important roles in modern camera image processing. Recently, neural networks have shown their effectiveness in joint demosaicing and denoising (JDD). Most of them first decompose a Bayer raw image into a four-channel RGGB image and then feed it into a neural network. This practice ignores the fact that the green channels are sampled at double the rate of the red and blue channels. In this paper, we propose a self-guidance network (SGNet), where the green channels are initially estimated and then serve as guidance to recover all missing values in the input image. In addition, since regions of different frequencies suffer different levels of degradation in image restoration, we propose a density-map guidance to help the model deal with a wide range of frequencies. Our model outperforms state-of-the-art joint demosaicing and denoising methods on four public datasets, including two real and two synthetic ones. Finally, we also verify that our method obtains the best results in joint demosaicing, denoising and super-resolution.
https://openaccess.thecvf.com/content_CVPR_2020/papers/Liu_Joint_Demosaicing_and_Denoising_With_Self_Guidance_CVPR_2020_paper.pdf
null
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Liu_Joint_Demosaicing_and_Denoising_With_Self_Guidance_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Liu_Joint_Demosaicing_and_Denoising_With_Self_Guidance_CVPR_2020_paper.html
CVPR 2020
null
null
null
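The SGNet abstract above refers to the common preprocessing step of decomposing a Bayer raw image into a four-channel RGGB image. A minimal sketch of that packing is shown below, assuming an RGGB mosaic with a red pixel at the top-left; this is the standard preprocessing step, not the proposed network itself.

```python
import numpy as np

def bayer_to_rggb(raw):
    """Pack an RGGB Bayer mosaic (H, W) into 4 half-resolution channels
    ordered R, G1, G2, B, assuming the top-left pixel is red."""
    r  = raw[0::2, 0::2]
    g1 = raw[0::2, 1::2]
    g2 = raw[1::2, 0::2]
    b  = raw[1::2, 1::2]
    return np.stack([r, g1, g2, b], axis=0)   # (4, H/2, W/2)

raw = np.arange(16, dtype=np.float32).reshape(4, 4)
print(bayer_to_rggb(raw).shape)  # (4, 2, 2)
```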
SpSequenceNet: Semantic Segmentation Network on 4D Point Clouds
Hanyu Shi, Guosheng Lin, Hao Wang, Tzu-Yi Hung, Zhenhua Wang
Point clouds are useful in many applications, such as autonomous driving and robotics, as they provide natural 3D information of the surrounding environment. While there is extensive research on 3D point clouds, scene understanding on 4D point clouds, i.e., series of consecutive 3D point cloud frames, is an emerging yet under-investigated topic. With 4D point clouds (3D point cloud videos), robotic systems could enhance their robustness by leveraging temporal information from previous frames. However, existing semantic segmentation methods on 4D point clouds suffer from low precision due to the spatial and temporal information loss in their network structures. In this paper, we propose SpSequenceNet to address this problem. The network is designed based on 3D sparse convolution, and we introduce two novel modules, a cross-frame global attention module and a cross-frame local interpolation module, to capture spatial and temporal information in 4D point clouds. We conduct extensive experiments on SemanticKITTI and achieve a state-of-the-art result of 43.1% mIoU, which is 1.5% higher than the previous best approach.
https://openaccess.thecvf.com/content_CVPR_2020/papers/Shi_SpSequenceNet_Semantic_Segmentation_Network_on_4D_Point_Clouds_CVPR_2020_paper.pdf
null
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Shi_SpSequenceNet_Semantic_Segmentation_Network_on_4D_Point_Clouds_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Shi_SpSequenceNet_Semantic_Segmentation_Network_on_4D_Point_Clouds_CVPR_2020_paper.html
CVPR 2020
null
null
null
Shoestring: Graph-Based Semi-Supervised Classification With Severely Limited Labeled Data
Wanyu Lin, Zhaolin Gao, Baochun Li
Graph-based semi-supervised learning has been shown to be one of the most effective classification approaches, as it can exploit connectivity patterns between labeled and unlabeled samples to improve learning performance. However, we show that existing techniques perform poorly when labeled data are severely limited. To address the problem of semi-supervised learning in the presence of severely limited labeled samples, we propose a new framework, called Shoestring , that incorporates metric learning into the paradigm of graph-based semi-supervised learning. In particular, our base model consists of a graph embedding network, followed by a metric learning network that learns a semantic metric space to represent the semantic similarity between the sparsely labeled and large numbers of unlabeled samples. Then the classification can be performed by clustering the unlabeled samples according to the learned semantic space. We empirically demonstrate Shoestring's superiority over many baselines, including graph convolutional networks, label propagation and their recent label-efficient variations (IGCN and GLP). We show that our framework achieves state-of-the-art performance for node classification in the low-data regime. In addition, we demonstrate the effectiveness of our framework on image classification tasks in the few-shot learning regime, with significant gains on miniImageNet (2.57%~3.59%) and tieredImageNet (1.05%~2.70%).
https://openaccess.thecvf.com/content_CVPR_2020/papers/Lin_Shoestring_Graph-Based_Semi-Supervised_Classification_With_Severely_Limited_Labeled_Data_CVPR_2020_paper.pdf
null
https://www.youtube.com/watch?v=gwy92d6IO1Q
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Lin_Shoestring_Graph-Based_Semi-Supervised_Classification_With_Severely_Limited_Labeled_Data_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Lin_Shoestring_Graph-Based_Semi-Supervised_Classification_With_Severely_Limited_Labeled_Data_CVPR_2020_paper.html
CVPR 2020
https://openaccess.thecvf.com/content_CVPR_2020/supplemental/Lin_Shoestring_Graph-Based_Semi-Supervised_CVPR_2020_supplemental.pdf
null
null
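The Shoestring abstract above describes classifying unlabeled samples by clustering them in a learned semantic metric space against the sparsely labeled samples. The sketch below shows one minimal version of that final step, nearest-prototype assignment by cosine similarity on already-computed embeddings; the embedding networks themselves are out of scope here, and the function names are hypothetical.

```python
import numpy as np

def classify_by_prototypes(emb_unlabeled, emb_labeled, labels):
    """Assign each unlabeled embedding to the class whose prototype
    (mean of its labeled embeddings) is closest in cosine similarity."""
    classes = np.unique(labels)
    protos = np.stack([emb_labeled[labels == c].mean(axis=0) for c in classes])

    def norm(x):
        return x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-8)

    sims = norm(emb_unlabeled) @ norm(protos).T       # (N, num_classes)
    return classes[sims.argmax(axis=1)]

rng = np.random.default_rng(2)
emb_l = rng.random((6, 8)); y_l = np.array([0, 0, 1, 1, 2, 2])
emb_u = rng.random((4, 8))
print(classify_by_prototypes(emb_u, emb_l, y_l))
```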
Distilling Image Dehazing With Heterogeneous Task Imitation
Ming Hong, Yuan Xie, Cuihua Li, Yanyun Qu
State-of-the-art deep dehazing models are often difficult to train. Knowledge distillation paves a way to train a student network with the assistance of a teacher network. However, most knowledge distillation methods are used for image classification, segmentation, and object detection; few investigate distillation for image restoration or use a different task for knowledge transfer. In this paper, we propose a knowledge-distillation dehazing network which distills image dehazing with heterogeneous task imitation. In our network, the teacher is an off-the-shelf auto-encoder network used for image reconstruction. The dehazing network is trained with the assistance of the teacher network under a process-oriented learning mechanism, with the student network imitating the image reconstruction task of the teacher. Moreover, we design a spatial-weighted channel-attention residual block for the student dehazing network to adaptively learn content-aware channel-level attention and pay more attention to the features needed for reconstructing densely hazy regions. To evaluate the effectiveness of the proposed method, we compare our method with several state-of-the-art methods on two synthetic and real-world datasets, as well as on real hazy images.
https://openaccess.thecvf.com/content_CVPR_2020/papers/Hong_Distilling_Image_Dehazing_With_Heterogeneous_Task_Imitation_CVPR_2020_paper.pdf
null
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Hong_Distilling_Image_Dehazing_With_Heterogeneous_Task_Imitation_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Hong_Distilling_Image_Dehazing_With_Heterogeneous_Task_Imitation_CVPR_2020_paper.html
CVPR 2020
null
null
null
Photometric Stereo via Discrete Hypothesis-and-Test Search
Kenji Enomoto, Michael Waechter, Kiriakos N. Kutulakos, Yasuyuki Matsushita
In this paper, we consider the problem of estimating surface normals of a scene with spatially varying, general BRDFs observed by a static camera under varying, known, distant illumination. Unlike previous approaches that are mostly based on continuous local optimization, we cast the problem as a discrete hypothesis-and-test search over the discretized space of surface normals. While a naive search requires a significant amount of time, we show that the expensive computation block can be precomputed in a scene-independent manner, resulting in accelerated inference for new scenes. This allows us to perform a full search over the finely discretized space of surface normals to determine the globally optimal surface normal for each scene point. We show that our method can accurately estimate the surface normals of scenes with spatially varying reflectances in a reasonable amount of time.
https://openaccess.thecvf.com/content_CVPR_2020/papers/Enomoto_Photometric_Stereo_via_Discrete_Hypothesis-and-Test_Search_CVPR_2020_paper.pdf
null
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Enomoto_Photometric_Stereo_via_Discrete_Hypothesis-and-Test_Search_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Enomoto_Photometric_Stereo_via_Discrete_Hypothesis-and-Test_Search_CVPR_2020_paper.html
CVPR 2020
https://openaccess.thecvf.com/content_CVPR_2020/supplemental/Enomoto_Photometric_Stereo_via_CVPR_2020_supplemental.pdf
null
null
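The hypothesis-and-test idea in the abstract above can be made concrete with a brute-force search over a discretized set of candidate normals for a single pixel. The sketch below assumes a simple Lambertian reflectance for the hypothesis test (the paper handles general, spatially varying BRDFs and precomputes the expensive block); it only illustrates the search structure.

```python
import numpy as np

def search_normal(intensities, lights, candidates):
    """Brute-force hypothesis-and-test over discretized normals for one pixel,
    under a Lambertian assumption (illustrative only).

    intensities: (L,) observations under L known distant lights
    lights     : (L, 3) unit light directions
    candidates : (K, 3) unit candidate normals
    """
    shading = np.clip(candidates @ lights.T, 0, None)         # (K, L)
    denom = (shading ** 2).sum(axis=1) + 1e-12
    albedo = (shading @ intensities) / denom                  # best per-hypothesis albedo
    err = ((albedo[:, None] * shading - intensities) ** 2).sum(axis=1)
    return candidates[err.argmin()]                           # globally best hypothesis

rng = np.random.default_rng(3)
L = rng.normal(size=(10, 3)); L /= np.linalg.norm(L, axis=1, keepdims=True)
true_n = np.array([0.0, 0.0, 1.0])
I = 0.7 * np.clip(L @ true_n, 0, None)
cand = rng.normal(size=(500, 3)); cand /= np.linalg.norm(cand, axis=1, keepdims=True)
print(search_normal(I, L, cand))   # should be close to [0, 0, 1]
```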
Task Agnostic Robust Learning on Corrupt Outputs by Correlation-Guided Mixture Density Networks
Sungjoon Choi, Sanghoon Hong, Kyungjae Lee, Sungbin Lim
In this paper, we focus on weakly supervised learning with noisy training data for both classification and regression problems. We assume that the training outputs are collected from a mixture of a target distribution and correlated noise distributions. Our proposed method simultaneously estimates the target distribution and the quality of each data sample, which is defined as the correlation between the target and data-generating distributions. The cornerstone of the proposed method is a Cholesky Block that enables modeling dependencies among mixture distributions in a differentiable manner, where we maintain a distribution over the network weights. We first provide illustrative examples in both regression and classification tasks to show the effectiveness of the proposed method. Then, the proposed method is extensively evaluated in a number of experiments, where we show that it consistently achieves comparable or superior performance to existing baseline methods in handling noisy data.
https://openaccess.thecvf.com/content_CVPR_2020/papers/Choi_Task_Agnostic_Robust_Learning_on_Corrupt_Outputs_by_Correlation-Guided_Mixture_CVPR_2020_paper.pdf
http://arxiv.org/abs/1805.06431
https://www.youtube.com/watch?v=RvFKk3I3nwk
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Choi_Task_Agnostic_Robust_Learning_on_Corrupt_Outputs_by_Correlation-Guided_Mixture_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Choi_Task_Agnostic_Robust_Learning_on_Corrupt_Outputs_by_Correlation-Guided_Mixture_CVPR_2020_paper.html
CVPR 2020
https://openaccess.thecvf.com/content_CVPR_2020/supplemental/Choi_Task_Agnostic_Robust_CVPR_2020_supplemental.pdf
null
null
Multi-Path Region Mining for Weakly Supervised 3D Semantic Segmentation on Point Clouds
Jiacheng Wei, Guosheng Lin, Kim-Hui Yap, Tzu-Yi Hung, Lihua Xie
Point clouds provide intrinsic geometric information and surface context for scene understanding. Existing methods for point cloud segmentation require a large amount of fully labeled data. With advanced depth sensors, collecting large-scale 3D datasets is no longer a cumbersome process. However, manually producing point-level labels on large-scale datasets is time- and labor-intensive. In this paper, we propose a weakly supervised approach to predict point-level results using weak labels on 3D point clouds. We introduce a multi-path region mining module to generate pseudo point-level labels from a classification network trained with weak labels. It mines localization cues for each class from various aspects of the network features using different attention modules. Then, we use the point-level pseudo labels to train a point cloud segmentation network in a fully supervised manner. To the best of our knowledge, this is the first method that uses cloud-level weak labels on raw 3D space to train a point cloud semantic segmentation network. In our setting, the 3D weak labels only indicate the classes that appear in the input sample. We discuss both scene- and subcloud-level weak labels on raw 3D point cloud data and perform in-depth experiments on them. On the ScanNet dataset, our result trained with subcloud-level labels is comparable to that of some fully supervised methods.
https://openaccess.thecvf.com/content_CVPR_2020/papers/Wei_Multi-Path_Region_Mining_for_Weakly_Supervised_3D_Semantic_Segmentation_on_CVPR_2020_paper.pdf
http://arxiv.org/abs/2003.13035
https://www.youtube.com/watch?v=V87_y6dPyRQ
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Wei_Multi-Path_Region_Mining_for_Weakly_Supervised_3D_Semantic_Segmentation_on_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Wei_Multi-Path_Region_Mining_for_Weakly_Supervised_3D_Semantic_Segmentation_on_CVPR_2020_paper.html
CVPR 2020
null
null
null
Single-Step Adversarial Training With Dropout Scheduling
Vivek B.S., R. Venkatesh Babu
Deep learning models have shown impressive performance across a spectrum of computer vision applications, including medical diagnosis and autonomous driving. One of the major concerns these models face is their susceptibility to adversarial attacks. Recognizing the importance of this issue, more researchers are working towards developing robust models that are less affected by adversarial attacks. Adversarial training shows promising results in this direction. In the adversarial training regime, models are trained with mini-batches augmented with adversarial samples. Fast and simple methods (e.g., single-step gradient ascent) are used for generating adversarial samples in order to reduce computational complexity. It has been shown that models trained using single-step adversarial training (where adversarial samples are generated with a non-iterative method) are only pseudo-robust, and this pseudo-robustness is attributed to the gradient masking effect. However, existing works fail to explain when and why gradient masking occurs during single-step adversarial training. In this work, (i) we show that models trained using single-step adversarial training learn to prevent the generation of single-step adversaries, and that this is due to over-fitting of the model during the initial stages of training, and (ii) to mitigate this effect, we propose a single-step adversarial training method with dropout scheduling. Unlike models trained using existing single-step adversarial training methods, models trained with the proposed method are robust against both single-step and multi-step adversarial attacks, and their performance is on par with models trained using computationally expensive multi-step adversarial training methods, in both white-box and black-box settings.
https://openaccess.thecvf.com/content_CVPR_2020/papers/B.S._Single-Step_Adversarial_Training_With_Dropout_Scheduling_CVPR_2020_paper.pdf
null
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/B.S._Single-Step_Adversarial_Training_With_Dropout_Scheduling_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/B.S._Single-Step_Adversarial_Training_With_Dropout_Scheduling_CVPR_2020_paper.html
CVPR 2020
https://openaccess.thecvf.com/content_CVPR_2020/supplemental/B.S._Single-Step_Adversarial_Training_CVPR_2020_supplemental.pdf
null
null
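The abstract above combines single-step (FGSM-style) adversarial training with a schedule on dropout. The sketch below shows a generic FGSM training step plus a simple linearly decaying dropout rate; the particular schedule, epsilon, and model are assumptions for illustration and not the authors' exact recipe.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """Single-step (FGSM) adversarial example."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad, = torch.autograd.grad(loss, x_adv)
    return (x_adv + eps * grad.sign()).clamp(0, 1).detach()

def train_epoch(model, dropout, loader, opt, epoch, total_epochs, eps=8 / 255):
    # hypothetical schedule: start at p=0.5, decay linearly to 0 by mid-training
    dropout.p = 0.5 * max(0.0, 1.0 - epoch / (0.5 * total_epochs))
    model.train()
    for x, y in loader:
        x_adv = fgsm(model, x, y, eps)
        opt.zero_grad()
        F.cross_entropy(model(x_adv), y).backward()
        opt.step()

# toy model whose dropout layer is exposed so its rate can be scheduled
dropout = nn.Dropout(p=0.5)
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128),
                      nn.ReLU(), dropout, nn.Linear(128, 10))
loader = [(torch.rand(4, 3, 32, 32), torch.randint(0, 10, (4,)))]
opt = torch.optim.SGD(model.parameters(), lr=0.1)
train_epoch(model, dropout, loader, opt, epoch=0, total_epochs=10)
```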
Online Depth Learning Against Forgetting in Monocular Videos
Zhenyu Zhang, Stephane Lathuiliere, Elisa Ricci, Nicu Sebe, Yan Yan, Jian Yang
Online depth learning is the problem of consistently adapting a depth estimation model to a continuously changing environment. This problem is challenging because the network easily overfits to the current environment and forgets its past experience. To address this problem, this paper presents a novel Learning to Prevent Forgetting (LPF) method for online mono-depth adaptation to new target domains in an unsupervised manner. Instead of updating the universal parameters, LPF learns adapter modules to efficiently adjust the feature representation and distribution without losing the pre-learned knowledge in the online setting. Specifically, to adapt to temporally continuous depth patterns in videos, we introduce a novel meta-learning approach that learns adapter modules by incorporating the online adaptation process into the learning objective. To further avoid overfitting, we propose a novel temporal-consistency regularization to harmonize the gradient descent procedure at each online learning step. Extensive evaluations on real-world datasets demonstrate that the proposed method, with very limited parameters, significantly improves estimation quality.
https://openaccess.thecvf.com/content_CVPR_2020/papers/Zhang_Online_Depth_Learning_Against_Forgetting_in_Monocular_Videos_CVPR_2020_paper.pdf
null
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Zhang_Online_Depth_Learning_Against_Forgetting_in_Monocular_Videos_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Zhang_Online_Depth_Learning_Against_Forgetting_in_Monocular_Videos_CVPR_2020_paper.html
CVPR 2020
https://openaccess.thecvf.com/content_CVPR_2020/supplemental/Zhang_Online_Depth_Learning_CVPR_2020_supplemental.pdf
null
null
Neuromorphic Camera Guided High Dynamic Range Imaging
Jin Han, Chu Zhou, Peiqi Duan, Yehui Tang, Chang Xu, Chao Xu, Tiejun Huang, Boxin Shi
Reconstruction of high dynamic range image from a single low dynamic range image captured by a frame-based conventional camera, which suffers from over- or under-exposure, is an ill-posed problem. In contrast, recent neuromorphic cameras are able to record high dynamic range scenes in the form of an intensity map, with much lower spatial resolution, and without color. In this paper, we propose a neuromorphic camera guided high dynamic range imaging pipeline, and a network consisting of specially designed modules according to each step in the pipeline, which bridges the domain gaps on resolution, dynamic range, and color representation between two types of sensors and images. A hybrid camera system has been built to validate that the proposed method is able to reconstruct quantitatively and qualitatively high-quality high dynamic range images by successfully fusing the images and intensity maps for various real-world scenarios.
https://openaccess.thecvf.com/content_CVPR_2020/papers/Han_Neuromorphic_Camera_Guided_High_Dynamic_Range_Imaging_CVPR_2020_paper.pdf
null
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Han_Neuromorphic_Camera_Guided_High_Dynamic_Range_Imaging_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Han_Neuromorphic_Camera_Guided_High_Dynamic_Range_Imaging_CVPR_2020_paper.html
CVPR 2020
https://openaccess.thecvf.com/content_CVPR_2020/supplemental/Han_Neuromorphic_Camera_Guided_CVPR_2020_supplemental.pdf
null
null
Reconstruct Locally, Localize Globally: A Model Free Method for Object Pose Estimation
Ming Cai, Ian Reid
Six degree-of-freedom pose estimation of a known object in a single image is a long-standing computer vision objective. It is classically posed as a correspondence problem between a known geometric model, such as a CAD model, and image locations. If a CAD model is not available, it is possible to use multi-view visual reconstruction methods to create a geometric model, and use this in the same manner. Instead, we propose a learning-based method whose input is a collection of images of a target object, and whose output is the pose of the object in a novel view. At inference time, our method maps from the RoI features of the input image to a dense collection of object-centric 3D coordinates, one per pixel. This dense 2D-3D mapping is then used to determine 6dof pose using standard PnP plus RANSAC. The model that maps 2D to object 3D coordinates is established at training time by automatically discovering and matching image landmarks that are consistent across multiple views. We show that this method eliminates the requirement for a 3D CAD model (needed by classical geometry-based methods and state-of-the-art learning-based methods alike) but still achieves performance on a par with the prior art.
https://openaccess.thecvf.com/content_CVPR_2020/papers/Cai_Reconstruct_Locally_Localize_Globally_A_Model_Free_Method_for_Object_CVPR_2020_paper.pdf
null
https://www.youtube.com/watch?v=Gf7bTrNIcqw
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Cai_Reconstruct_Locally_Localize_Globally_A_Model_Free_Method_for_Object_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Cai_Reconstruct_Locally_Localize_Globally_A_Model_Free_Method_for_Object_CVPR_2020_paper.html
CVPR 2020
https://openaccess.thecvf.com/content_CVPR_2020/supplemental/Cai_Reconstruct_Locally_Localize_CVPR_2020_supplemental.pdf
null
null
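The abstract above states that the predicted dense 2D-3D mapping is turned into a 6-DoF pose with standard PnP plus RANSAC. The sketch below shows that standard final step using OpenCV's solvePnPRansac; the wrapper function, the synthetic points, and the intrinsics are illustrative, while the dense coordinate prediction network itself is not shown.

```python
import numpy as np
import cv2

def pose_from_dense_map(pts3d, pts2d, K):
    """Recover a 6-DoF pose from per-pixel object coordinates (N, 3) and
    their pixel locations (N, 2) with standard PnP + RANSAC (needs N >= 4)."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        pts3d.astype(np.float32), pts2d.astype(np.float32), K, None,
        reprojectionError=3.0, iterationsCount=200)
    if not ok:
        raise RuntimeError("PnP failed")
    R, _ = cv2.Rodrigues(rvec)   # rotation matrix from the Rodrigues vector
    return R, tvec, inliers

# usage: project synthetic object points with a known pose and recover it
rng = np.random.default_rng(4)
K = np.array([[500.0, 0, 320], [0, 500, 240], [0, 0, 1]])
pts3d = rng.uniform(-1, 1, size=(50, 3)) + np.array([0, 0, 4.0])
rvec_gt = np.array([[0.1], [0.2], [0.3]])
tvec_gt = np.array([[0.05], [-0.1], [0.2]])
pts2d, _ = cv2.projectPoints(pts3d, rvec_gt, tvec_gt, K, None)
R, t, inl = pose_from_dense_map(pts3d, pts2d.reshape(-1, 2), K)
print(t.ravel())   # should be close to [0.05, -0.1, 0.2]
```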
Deep Learning for Handling Kernel/model Uncertainty in Image Deconvolution
Yuesong Nan, Hui Ji
Most existing non-blind image deconvolution methods assume that the given blurring kernel is error-free. In practice, the blurring kernel is often estimated by a blind deblurring algorithm and is therefore not exact. Moreover, the convolution model is only an approximation to the practical blurring effect. It is known that non-blind deconvolution is susceptible to such kernel/model errors. Based on an error-in-variables (EIV) model of image blurring that takes kernel error into consideration, this paper presents a deep learning method for deconvolution, which unrolls a total-least-squares (TLS) estimator whose related priors are learned by neural networks (NNs). Experiments show that the proposed method is robust to kernel/model error and noticeably outperforms existing solutions when deblurring images with noisy kernels, e.g., those estimated by existing blind motion deblurring methods.
https://openaccess.thecvf.com/content_CVPR_2020/papers/Nan_Deep_Learning_for_Handling_Kernelmodel_Uncertainty_in_Image_Deconvolution_CVPR_2020_paper.pdf
null
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Nan_Deep_Learning_for_Handling_Kernelmodel_Uncertainty_in_Image_Deconvolution_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Nan_Deep_Learning_for_Handling_Kernelmodel_Uncertainty_in_Image_Deconvolution_CVPR_2020_paper.html
CVPR 2020
https://openaccess.thecvf.com/content_CVPR_2020/supplemental/Nan_Deep_Learning_for_CVPR_2020_supplemental.pdf
null
null
VPLNet: Deep Single View Normal Estimation With Vanishing Points and Lines
Rui Wang, David Geraghty, Kevin Matzen, Richard Szeliski, Jan-Michael Frahm
We present a novel single-view surface normal estimation method that combines traditional line and vanishing point analysis with a deep learning approach. Starting from a color image and a Manhattan line map, we use a deep neural network to regress on a dense normal map, and a dense Manhattan label map that identifies planar regions aligned with the Manhattan directions. We fuse the normal map and label map in a fully differentiable manner to produce a refined normal map as final output. To do so, we softly decompose the output into a Manhattan part and a non-Manhattan part. The Manhattan part is treated by discrete classification and vanishing points, while the non-Manhattan part is learned by direct supervision. Our method achieves state-of-the-art results on standard single-view normal estimation benchmarks. More importantly, we show that by using vanishing points and lines, our method has better generalization ability than existing works. In addition, we demonstrate how our surface normal network can improve the performance of depth estimation networks, both quantitatively and qualitatively, in particular, in 3D reconstructions of walls and other flat surfaces.
https://openaccess.thecvf.com/content_CVPR_2020/papers/Wang_VPLNet_Deep_Single_View_Normal_Estimation_With_Vanishing_Points_and_CVPR_2020_paper.pdf
null
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Wang_VPLNet_Deep_Single_View_Normal_Estimation_With_Vanishing_Points_and_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Wang_VPLNet_Deep_Single_View_Normal_Estimation_With_Vanishing_Points_and_CVPR_2020_paper.html
CVPR 2020
null
null
null
Intra- and Inter-Action Understanding via Temporal Action Parsing
Dian Shao, Yue Zhao, Bo Dai, Dahua Lin
Current methods for action recognition primarily rely on deep convolutional networks to derive feature embeddings of visual and motion features. While these methods have demonstrated remarkable performance on standard benchmarks, we are still in need of a better understanding as to how the videos, in particular their internal structures, relate to high-level semantics, which may lead to benefits in multiple aspects, e.g. interpretable predictions and even new methods that can take the recognition performances to a next level. Towards this goal, we construct TAPOS, a new dataset developed on sport videos with manual annotations of sub-actions, and conduct a study on temporal action parsing on top. Our study shows that a sport activity usually consists of multiple sub-actions and that the awareness of such temporal structures is beneficial to action recognition. We also investigate a number of temporal parsing methods, and thereon devise an improved method that is capable of mining sub-actions from training data without knowing the labels of them. On the constructed TAPOS, the proposed method is shown to reveal intra-action information, i.e. how action instances are made of sub-actions, and inter-action information, i.e. one specific sub-action may commonly appear in various actions.
https://openaccess.thecvf.com/content_CVPR_2020/papers/Shao_Intra-_and_Inter-Action_Understanding_via_Temporal_Action_Parsing_CVPR_2020_paper.pdf
null
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Shao_Intra-_and_Inter-Action_Understanding_via_Temporal_Action_Parsing_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Shao_Intra-_and_Inter-Action_Understanding_via_Temporal_Action_Parsing_CVPR_2020_paper.html
CVPR 2020
https://openaccess.thecvf.com/content_CVPR_2020/supplemental/Shao_Intra-_and_Inter-Action_CVPR_2020_supplemental.zip
null
null
Graph-Guided Architecture Search for Real-Time Semantic Segmentation
Peiwen Lin, Peng Sun, Guangliang Cheng, Sirui Xie, Xi Li, Jianping Shi
Designing a lightweight semantic segmentation network often requires researchers to find a trade-off between performance and speed, which is always empirical due to the limited interpretability of neural networks. In order to release researchers from these tedious mechanical trials, we propose a Graph-guided Architecture Search (GAS) pipeline to automatically search for real-time semantic segmentation networks. Unlike previous works that use a simplified search space and stack a repeatable cell to form a network, we introduce a novel search mechanism with a new search space where a lightweight model can be effectively explored through cell-level diversity and a latency-oriented constraint. Specifically, to produce the cell-level diversity, the cell-sharing constraint is eliminated in a cell-independent manner. Then a graph convolution network (GCN) is seamlessly integrated as a communication mechanism between cells. Finally, a latency-oriented constraint is incorporated into the search process to balance speed and performance. Extensive experiments on the Cityscapes and CamVid datasets demonstrate that GAS achieves a new state-of-the-art trade-off between accuracy and speed. In particular, on the Cityscapes dataset, GAS achieves the new best performance of 73.5% mIoU at a speed of 108.4 FPS on a Titan Xp.
https://openaccess.thecvf.com../../content_CVPR_2020/papers/Lin_Graph-Guided_Architecture_Search_for_Real-Time_Semantic_Segmentation_CVPR_2020_paper.pdf
http://arxiv.org/abs/1909.06793
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Lin_Graph-Guided_Architecture_Search_for_Real-Time_Semantic_Segmentation_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Lin_Graph-Guided_Architecture_Search_for_Real-Time_Semantic_Segmentation_CVPR_2020_paper.html
CVPR 2020
https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Lin_Graph-Guided_Architecture_Search_CVPR_2020_supplemental.pdf
null
null
Polarized Reflection Removal With Perfect Alignment in the Wild
Chenyang Lei, Xuhua Huang, Mengdi Zhang, Qiong Yan, Wenxiu Sun, Qifeng Chen
We present a novel formulation for removing reflection from polarized images in the wild. We first identify the misalignment issues of existing reflection removal datasets, where the collected reflection-free images are not perfectly aligned with the input mixed images due to glass refraction. We then build a new dataset with more than 100 types of glass in which the obtained transmission images are perfectly aligned with the input mixed images. Second, capitalizing on the special relationship between reflection and polarized light, we propose a polarized reflection removal model with a two-stage architecture. In addition, we design a novel perceptual NCC loss that can improve the performance of reflection removal and general image decomposition tasks. We conduct extensive experiments, and the results suggest that our model outperforms state-of-the-art methods on reflection removal.
https://openaccess.thecvf.com../../content_CVPR_2020/papers/Lei_Polarized_Reflection_Removal_With_Perfect_Alignment_in_the_Wild_CVPR_2020_paper.pdf
http://arxiv.org/abs/2003.12789
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Lei_Polarized_Reflection_Removal_With_Perfect_Alignment_in_the_Wild_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Lei_Polarized_Reflection_Removal_With_Perfect_Alignment_in_the_Wild_CVPR_2020_paper.html
CVPR 2020
null
null
null
Blur Aware Calibration of Multi-Focus Plenoptic Camera
Mathieu Labussiere, Celine Teuliere, Frederic Bernardin, Omar Ait-Aider
This paper presents a novel calibration algorithm for Multi-Focus Plenoptic Cameras (MFPCs) using raw images only. The design of such cameras is usually complex and relies on precise placement of optic elements. Several calibration procedures have been proposed to retrieve the camera parameters, but they rely on simplified models, on reconstructed images to extract features, or on multiple calibrations when several types of micro-lenses are used. Considering blur information, we propose a new Blur Aware Plenoptic (BAP) feature. It is first exploited in a pre-calibration step that retrieves initial camera parameters, and then used to express a new cost function for our single optimization process. The effectiveness of our calibration method is validated by quantitative and qualitative experiments.
https://openaccess.thecvf.com../../content_CVPR_2020/papers/Labussiere_Blur_Aware_Calibration_of_Multi-Focus_Plenoptic_Camera_CVPR_2020_paper.pdf
http://arxiv.org/abs/2004.07745
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Labussiere_Blur_Aware_Calibration_of_Multi-Focus_Plenoptic_Camera_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Labussiere_Blur_Aware_Calibration_of_Multi-Focus_Plenoptic_Camera_CVPR_2020_paper.html
CVPR 2020
https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Labussiere_Blur_Aware_Calibration_CVPR_2020_supplemental.pdf
null
null
Single-Shot Monocular RGB-D Imaging Using Uneven Double Refraction
Andreas Meuleman, Seung-Hwan Baek, Felix Heide, Min H. Kim
Cameras that capture color and depth information have become an essential imaging modality for applications in robotics, autonomous driving, and virtual and augmented reality. Existing RGB-D cameras rely on multiple sensors or active illumination with specialized sensors. In this work, we propose a method for monocular single-shot RGB-D imaging. Instead of learning depth from single-image depth cues, we revisit double-refraction imaging using a birefractive medium, measuring depth as the displacement of differently refracted images superimposed in a single capture. However, existing double-refraction methods are orders of magnitude too slow to be used in real-time applications, e.g., in robotics, and provide only inaccurate depth due to correspondence ambiguity in double refraction. We resolve this ambiguity optically by leveraging the orthogonality of the two linearly polarized rays in double refraction -- introducing uneven double refraction by adding a linear polarizer to the birefractive medium. Doing so makes it possible to reconstruct sparse depth and color simultaneously in real time. We validate the proposed method, both synthetically and experimentally, and demonstrate 3D object detection and photographic applications.
https://openaccess.thecvf.com../../content_CVPR_2020/papers/Meuleman_Single-Shot_Monocular_RGB-D_Imaging_Using_Uneven_Double_Refraction_CVPR_2020_paper.pdf
null
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Meuleman_Single-Shot_Monocular_RGB-D_Imaging_Using_Uneven_Double_Refraction_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Meuleman_Single-Shot_Monocular_RGB-D_Imaging_Using_Uneven_Double_Refraction_CVPR_2020_paper.html
CVPR 2020
https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Meuleman_Single-Shot_Monocular_RGB-D_CVPR_2020_supplemental.zip
null
null
Uninformed Students: Student-Teacher Anomaly Detection With Discriminative Latent Embeddings
Paul Bergmann, Michael Fauser, David Sattlegger, Carsten Steger
We introduce a powerful student-teacher framework for the challenging problem of unsupervised anomaly detection and pixel-precise anomaly segmentation in high-resolution images. Student networks are trained to regress the output of a descriptive teacher network that was pretrained on a large dataset of patches from natural images. This circumvents the need for prior data annotation. Anomalies are detected when the outputs of the student networks differ from that of the teacher network. This happens when they fail to generalize outside the manifold of anomaly-free training data. The intrinsic uncertainty in the student networks is used as an additional scoring function that indicates anomalies. We compare our method to a large number of existing deep learning based methods for unsupervised anomaly detection. Our experiments demonstrate improvements over state-of-the-art methods on a number of real-world datasets, including the recently introduced MVTec Anomaly Detection dataset that was specifically designed to benchmark anomaly segmentation algorithms.
https://openaccess.thecvf.com../../content_CVPR_2020/papers/Bergmann_Uninformed_Students_Student-Teacher_Anomaly_Detection_With_Discriminative_Latent_Embeddings_CVPR_2020_paper.pdf
http://arxiv.org/abs/1911.02357
https://www.youtube.com/watch?v=pozNR-dgzOM
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Bergmann_Uninformed_Students_Student-Teacher_Anomaly_Detection_With_Discriminative_Latent_Embeddings_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Bergmann_Uninformed_Students_Student-Teacher_Anomaly_Detection_With_Discriminative_Latent_Embeddings_CVPR_2020_paper.html
CVPR 2020
https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Bergmann_Uninformed_Students_Student-Teacher_CVPR_2020_supplemental.pdf
null
null
Learning to Restore Low-Light Images via Decomposition-and-Enhancement
Ke Xu, Xin Yang, Baocai Yin, Rynson W.H. Lau
Low-light images typically suffer from two problems. First, they have low visibility (i.e., small pixel values). Second, noise becomes significant and disrupts the image content, due to the low signal-to-noise ratio. Most existing low-light image enhancement methods, however, learn from noise-negligible datasets. They rely on users having good photographic skills in taking images with low noise. Unfortunately, this is not the case for the majority of low-light images. While concurrently enhancing a low-light image and removing its noise is ill-posed, we observe that noise exhibits different levels of contrast in different frequency layers, and it is much easier to detect noise in the low-frequency layer than in the high-frequency one. Inspired by this observation, we propose a frequency-based decomposition-and-enhancement model for low-light image enhancement. Based on this model, we present a novel network that first learns to recover image objects in the low-frequency layer and then enhances high-frequency details based on the recovered image objects. In addition, we have prepared a new low-light image dataset with real noise to facilitate learning. Finally, we have conducted extensive experiments to show that the proposed method outperforms state-of-the-art approaches in enhancing practical noisy low-light images.
https://openaccess.thecvf.com../../content_CVPR_2020/papers/Xu_Learning_to_Restore_Low-Light_Images_via_Decomposition-and-Enhancement_CVPR_2020_paper.pdf
null
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Xu_Learning_to_Restore_Low-Light_Images_via_Decomposition-and-Enhancement_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Xu_Learning_to_Restore_Low-Light_Images_via_Decomposition-and-Enhancement_CVPR_2020_paper.html
CVPR 2020
https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Xu_Learning_to_Restore_CVPR_2020_supplemental.pdf
null
null
HRank: Filter Pruning Using High-Rank Feature Map
Mingbao Lin, Rongrong Ji, Yan Wang, Yichen Zhang, Baochang Zhang, Yonghong Tian, Ling Shao
Neural network pruning offers a promising prospect for deploying deep neural networks on resource-limited devices. However, existing methods are still challenged by the training inefficiency and labor cost of pruning designs, due to missing theoretical guidance on non-salient network components. In this paper, we propose a novel filter pruning method by exploring the High Rank of feature maps (HRank). Our HRank is inspired by the discovery that the average rank of multiple feature maps generated by a single filter is always the same, regardless of the number of image batches CNNs receive. Based on HRank, we develop a method that is mathematically formulated to prune filters with low-rank feature maps. The principle behind our pruning is that low-rank feature maps contain less information, and thus pruned results can be easily reproduced. Besides, we experimentally show that weights with high-rank feature maps contain more important information, such that even when a portion is not updated, very little damage is done to the model performance. Without introducing any additional constraints, HRank leads to significant improvements over the state of the art in terms of FLOPs and parameter reduction, with similar accuracies. For example, with ResNet-110, we achieve a 58.2% FLOPs reduction by removing 59.2% of the parameters, with only a small loss of 0.14% in top-1 accuracy on CIFAR-10. With ResNet-50, we achieve a 43.8% FLOPs reduction by removing 36.7% of the parameters, with only a loss of 1.17% in top-1 accuracy on ImageNet. The code is available at https://github.com/lmbxmu/HRank.
https://openaccess.thecvf.com../../content_CVPR_2020/papers/Lin_HRank_Filter_Pruning_Using_High-Rank_Feature_Map_CVPR_2020_paper.pdf
http://arxiv.org/abs/2002.10179
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Lin_HRank_Filter_Pruning_Using_High-Rank_Feature_Map_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Lin_HRank_Filter_Pruning_Using_High-Rank_Feature_Map_CVPR_2020_paper.html
CVPR 2020
null
null
null
AANet: Adaptive Aggregation Network for Efficient Stereo Matching
Haofei Xu, Juyong Zhang
Despite the remarkable progress made by learning-based stereo matching algorithms, one key challenge remains unsolved. Current state-of-the-art stereo models are mostly based on costly 3D convolutions; their cubic computational complexity and high memory consumption make them quite expensive to deploy in real-world applications. In this paper, we aim at completely replacing the commonly used 3D convolutions to achieve fast inference speed while maintaining comparable accuracy. To this end, we first propose a sparse-points-based intra-scale cost aggregation method to alleviate the well-known edge-fattening issue at disparity discontinuities. Further, we approximate the traditional cross-scale cost aggregation algorithm with neural network layers to handle large textureless regions. Both modules are simple, lightweight, and complementary, leading to an effective and efficient architecture for cost aggregation. With these two modules, we can not only significantly speed up existing top-performing models (e.g., 41x faster than GC-Net, 4x faster than PSMNet, and 38x faster than GA-Net), but also improve the performance of fast stereo models (e.g., StereoNet). We also achieve competitive results on the Scene Flow and KITTI datasets while running at 62ms, demonstrating the versatility and high efficiency of the proposed method. Our full framework is available at https://github.com/haofeixu/aanet.
https://openaccess.thecvf.com../../content_CVPR_2020/papers/Xu_AANet_Adaptive_Aggregation_Network_for_Efficient_Stereo_Matching_CVPR_2020_paper.pdf
http://arxiv.org/abs/2004.09548
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Xu_AANet_Adaptive_Aggregation_Network_for_Efficient_Stereo_Matching_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Xu_AANet_Adaptive_Aggregation_Network_for_Efficient_Stereo_Matching_CVPR_2020_paper.html
CVPR 2020
https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Xu_AANet_Adaptive_Aggregation_CVPR_2020_supplemental.pdf
null
null
Unbiased Scene Graph Generation From Biased Training
Kaihua Tang, Yulei Niu, Jianqiang Huang, Jiaxin Shi, Hanwang Zhang
Today's scene graph generation (SGG) task is still far from practical, mainly due to the severe training bias, e.g., collapsing diverse "human walk on / sit on / lay on beach" into "human on beach". Given such SGG, the down-stream tasks such as VQA can hardly infer better scene structures than merely a bag of objects. However, debiasing in SGG is not trivial because traditional debiasing methods cannot distinguish between the good and bad bias, e.g., good context prior (e.g., "person read book" rather than "eat") and bad long-tailed bias (e.g., "near" dominating "behind / in front of"). In this paper, we present a novel SGG framework based on causal inference but not the conventional likelihood. We first build a causal graph for SGG, and perform traditional biased training with the graph. Then, we propose to draw the counterfactual causality from the trained graph to infer the effect from the bad bias, which should be removed. In particular, we use Total Direct Effect (TDE) as the proposed final predicate score for unbiased SGG. Note that our framework is agnostic to any SGG model and thus can be widely applied in the community who seeks unbiased predictions. By using the proposed Scene Graph Diagnosis toolkit on the SGG benchmark Visual Genome and several prevailing models, we observed significant improvements over the previous state-of-the-art methods.
https://openaccess.thecvf.com../../content_CVPR_2020/papers/Tang_Unbiased_Scene_Graph_Generation_From_Biased_Training_CVPR_2020_paper.pdf
http://arxiv.org/abs/2002.11949
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Tang_Unbiased_Scene_Graph_Generation_From_Biased_Training_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Tang_Unbiased_Scene_Graph_Generation_From_Biased_Training_CVPR_2020_paper.html
CVPR 2020
https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Tang_Unbiased_Scene_Graph_CVPR_2020_supplemental.pdf
null
null
A Semi-Supervised Assessor of Neural Architectures
Yehui Tang, Yunhe Wang, Yixing Xu, Hanting Chen, Boxin Shi, Chao Xu, Chunjing Xu, Qi Tian, Chang Xu
Neural architecture search (NAS) aims to automatically design deep neural networks of satisfactory performance. Within NAS, an architecture performance predictor is critical for efficiently evaluating an intermediate neural architecture. However, training this predictor usually requires collecting a number of neural architectures and their corresponding real performance. In contrast with classical performance predictors optimized in a fully supervised way, this paper suggests a semi-supervised assessor of neural architectures. We employ an auto-encoder to discover meaningful representations of neural architectures. Taking each neural architecture as an individual instance in the search space, we construct a graph to capture their intrinsic similarities, where both labeled and unlabeled architectures are involved. A graph convolutional neural network is introduced to predict the performance of architectures based on the learned representations and their relations modeled by the graph. Extensive experimental results on the NAS-Benchmark-101 dataset demonstrate that our method significantly reduces the number of fully trained architectures required for finding efficient architectures.
https://openaccess.thecvf.com../../content_CVPR_2020/papers/Tang_A_Semi-Supervised_Assessor_of_Neural_Architectures_CVPR_2020_paper.pdf
http://arxiv.org/abs/2005.06821
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Tang_A_Semi-Supervised_Assessor_of_Neural_Architectures_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Tang_A_Semi-Supervised_Assessor_of_Neural_Architectures_CVPR_2020_paper.html
CVPR 2020
null
null
null
Proxy Anchor Loss for Deep Metric Learning
Sungyeon Kim, Dongwon Kim, Minsu Cho, Suha Kwak
Existing metric learning losses can be categorized into two classes: pair-based and proxy-based losses. The former class can leverage fine-grained semantic relations between data points, but generally slows convergence due to its high training complexity. In contrast, the latter class enables fast and reliable convergence, but cannot consider the rich data-to-data relations. This paper presents a new proxy-based loss that takes advantage of both pair- and proxy-based methods and overcomes their limitations. Thanks to the use of proxies, our loss boosts the speed of convergence and is robust against noisy labels and outliers. At the same time, it allows embedding vectors of data to interact with each other through its gradients to exploit data-to-data relations. Our method is evaluated on four public benchmarks, where a standard network trained with our loss achieves state-of-the-art performance and converges most quickly.
https://openaccess.thecvf.com../../content_CVPR_2020/papers/Kim_Proxy_Anchor_Loss_for_Deep_Metric_Learning_CVPR_2020_paper.pdf
http://arxiv.org/abs/2003.13911
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Kim_Proxy_Anchor_Loss_for_Deep_Metric_Learning_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Kim_Proxy_Anchor_Loss_for_Deep_Metric_Learning_CVPR_2020_paper.html
CVPR 2020
null
null
null
Universal Litmus Patterns: Revealing Backdoor Attacks in CNNs
Soheil Kolouri, Aniruddha Saha, Hamed Pirsiavash, Heiko Hoffmann
The unprecedented success of deep neural networks in many applications has made these networks a prime target for adversarial exploitation. In this paper, we introduce a benchmark technique for detecting backdoor attacks (aka Trojan attacks) on deep convolutional neural networks (CNNs). We introduce the concept of Universal Litmus Patterns (ULPs), which enable one to reveal backdoor attacks by feeding these universal patterns to the network and analyzing the output (i.e., classifying the network as `clean' or `corrupted'). This detection is fast because it requires only a few forward passes through a CNN. We demonstrate the effectiveness of ULPs for detecting backdoor attacks on thousands of networks with different architectures trained on four benchmark datasets, namely the German Traffic Sign Recognition Benchmark (GTSRB), MNIST, CIFAR10, and Tiny-ImageNet. The codes and train/test models for this paper can be found here: https://umbcvision.github.io/Universal-Litmus-Patterns/.
https://openaccess.thecvf.com../../content_CVPR_2020/papers/Kolouri_Universal_Litmus_Patterns_Revealing_Backdoor_Attacks_in_CNNs_CVPR_2020_paper.pdf
http://arxiv.org/abs/1906.10842
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Kolouri_Universal_Litmus_Patterns_Revealing_Backdoor_Attacks_in_CNNs_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Kolouri_Universal_Litmus_Patterns_Revealing_Backdoor_Attacks_in_CNNs_CVPR_2020_paper.html
CVPR 2020
null
null
null
PQ-NET: A Generative Part Seq2Seq Network for 3D Shapes
Rundi Wu, Yixin Zhuang, Kai Xu, Hao Zhang, Baoquan Chen
We introduce PQ-NET, a deep neural network which represents and generates 3D shapes via sequential part assembly. The input to our network is a 3D shape segmented into parts, where each part is first encoded into a feature representation using a part autoencoder. The core component of PQ-NET is a sequence-to-sequence or Seq2Seq autoencoder which encodes a sequence of part features into a latent vector of fixed size, and the decoder reconstructs the 3D shape, one part at a time, resulting in a sequential assembly. The latent space formed by the Seq2Seq encoder encodes both part structure and fine part geometry. The decoder can be adapted to perform several generative tasks including shape autoencoding, interpolation, novel shape generation, and single-view 3D reconstruction, where the generated shapes are all composed of meaningful parts.
https://openaccess.thecvf.com../../content_CVPR_2020/papers/Wu_PQ-NET_A_Generative_Part_Seq2Seq_Network_for_3D_Shapes_CVPR_2020_paper.pdf
null
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Wu_PQ-NET_A_Generative_Part_Seq2Seq_Network_for_3D_Shapes_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Wu_PQ-NET_A_Generative_Part_Seq2Seq_Network_for_3D_Shapes_CVPR_2020_paper.html
CVPR 2020
https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Wu_PQ-NET_A_Generative_CVPR_2020_supplemental.pdf
null
null
Extreme Relative Pose Network Under Hybrid Representations
Zhenpei Yang, Siming Yan, Qixing Huang
In this paper, we introduce a novel RGB-D based relative pose estimation approach that is suitable for small-overlapping or non-overlapping scans and can output multiple relative poses. Our method performs scene completion and matches the completed scans. However, instead of using a fixed representation for completion, the key idea is to utilize hybrid representations that combine 360-image, 2D image-based layout, and planar patches. This approach offers adaptive feature representations for relative pose estimation. Besides, we introduce a global-2-local matching procedure, which utilizes initial relative poses obtained during the global phase to detect and then integrate geometric relations for pose refinement. Experimental results justify the potential of this approach across a wide range of benchmark datasets. For example, on ScanNet, the rotation/translation errors of the top-1/top-5 predictions of our approach are 28.6°/0.90m and 16.8°/0.76m, respectively. Our approach also considerably boosts the performance of multi-scan reconstruction in few-view reconstruction settings.
https://openaccess.thecvf.com../../content_CVPR_2020/papers/Yang_Extreme_Relative_Pose_Network_Under_Hybrid_Representations_CVPR_2020_paper.pdf
http://arxiv.org/abs/1912.11695
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Yang_Extreme_Relative_Pose_Network_Under_Hybrid_Representations_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Yang_Extreme_Relative_Pose_Network_Under_Hybrid_Representations_CVPR_2020_paper.html
CVPR 2020
https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Yang_Extreme_Relative_Pose_CVPR_2020_supplemental.pdf
null
null
Single-Image HDR Reconstruction by Learning to Reverse the Camera Pipeline
Yu-Lun Liu, Wei-Sheng Lai, Yu-Sheng Chen, Yi-Lung Kao, Ming-Hsuan Yang, Yung-Yu Chuang, Jia-Bin Huang
Recovering a high dynamic range (HDR) image from a single low dynamic range (LDR) input image is challenging due to missing details in under-/over-exposed regions caused by quantization and saturation of camera sensors. In contrast to existing learning-based methods, our core idea is to incorporate the domain knowledge of the LDR image formation pipeline into our model. We model the HDR-to-LDR image formation pipeline as the (1) dynamic range clipping, (2) non-linear mapping from a camera response function, and (3) quantization. We then propose to learn three specialized CNNs to reverse these steps. By decomposing the problem into specific sub-tasks, we impose effective physical constraints to facilitate the training of individual sub-networks. Finally, we jointly fine-tune the entire model end-to-end to reduce error accumulation. With extensive quantitative and qualitative experiments on diverse image datasets, we demonstrate that the proposed method performs favorably against state-of-the-art single-image HDR reconstruction algorithms.
https://openaccess.thecvf.com../../content_CVPR_2020/papers/Liu_Single-Image_HDR_Reconstruction_by_Learning_to_Reverse_the_Camera_Pipeline_CVPR_2020_paper.pdf
http://arxiv.org/abs/2004.01179
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Liu_Single-Image_HDR_Reconstruction_by_Learning_to_Reverse_the_Camera_Pipeline_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Liu_Single-Image_HDR_Reconstruction_by_Learning_to_Reverse_the_Camera_Pipeline_CVPR_2020_paper.html
CVPR 2020
https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Liu_Single-Image_HDR_Reconstruction_CVPR_2020_supplemental.pdf
null
null
Learning to Observe: Approximating Human Perceptual Thresholds for Detection of Suprathreshold Image Transformations
Alan Dolhasz, Carlo Harvey, Ian Williams
Many tasks in computer vision are often calibrated and evaluated relative to human perception. In this paper, we propose to directly approximate the perceptual function performed by human observers completing a visual detection task. Specifically, we present a novel methodology for learning to detect image transformations visible to human observers through approximating perceptual thresholds. To do this, we carry out a subjective two-alternative forced-choice study to estimate perceptual thresholds of human observers detecting local exposure shifts in images. We then leverage transformation equivariant representation learning to overcome issues of limited perceptual data. This representation is then used to train a dense convolutional classifier capable of detecting local suprathreshold exposure shifts - a distortion common to image composites. In this context, our model can approximate perceptual thresholds with an average error of 0.1148 exposure stops between empirical and predicted thresholds. It can also be trained to detect a range of different local transformations.
https://openaccess.thecvf.com../../content_CVPR_2020/papers/Dolhasz_Learning_to_Observe_Approximating_Human_Perceptual_Thresholds_for_Detection_of_CVPR_2020_paper.pdf
http://arxiv.org/abs/1912.06433
https://www.youtube.com/watch?v=BpzUgPANkp4
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Dolhasz_Learning_to_Observe_Approximating_Human_Perceptual_Thresholds_for_Detection_of_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Dolhasz_Learning_to_Observe_Approximating_Human_Perceptual_Thresholds_for_Detection_of_CVPR_2020_paper.html
CVPR 2020
https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Dolhasz_Learning_to_Observe_CVPR_2020_supplemental.pdf
https://cove.thecvf.com/datasets/333
null
MEBOW: Monocular Estimation of Body Orientation in the Wild
Chenyan Wu, Yukun Chen, Jiajia Luo, Che-Chun Su, Anuja Dawane, Bikramjot Hanzra, Zhuo Deng, Bilan Liu, James Z. Wang, Cheng-hao Kuo
Body orientation estimation provides crucial visual cues in many applications, including robotics and autonomous driving. It is particularly desirable when 3-D pose estimation is difficult to infer due to poor image resolution, occlusion or indistinguishable body parts. We present COCO-MEBOW (Monocular Estimation of Body Orientation in the Wild), a new large-scale dataset for orientation estimation from a single in-the-wild image. The body-orientation labels for around 130K human bodies within 55K images from the COCO dataset have been collected using an efficient and high-precision annotation pipeline. We also validated the benefits of the dataset. First, we show that our dataset can substantially improve the performance and the robustness of a human body orientation estimation model, the development of which was previously limited by the scale and diversity of the available training data. Additionally, we present a novel triple-source solution for 3-D human pose estimation, where 3-D pose labels, 2-D pose labels, and our body-orientation labels are all used in joint training. Our model significantly outperforms state-of-the-art dual-source solutions for monocular 3-D human pose estimation, where training only uses 3-D pose labels and 2-D pose labels. This substantiates an important advantage of MEBOW for 3-D human pose estimation, which is particularly appealing because the per-instance labeling cost for body orientations is far less than that for 3-D poses. The work demonstrates high potential of MEBOW in addressing real-world challenges involving understanding human behaviors. Further information of this work is available at https://chenyanwu.github.io/MEBOW/.
https://openaccess.thecvf.com../../content_CVPR_2020/papers/Wu_MEBOW_Monocular_Estimation_of_Body_Orientation_in_the_Wild_CVPR_2020_paper.pdf
null
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Wu_MEBOW_Monocular_Estimation_of_Body_Orientation_in_the_Wild_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Wu_MEBOW_Monocular_Estimation_of_Body_Orientation_in_the_Wild_CVPR_2020_paper.html
CVPR 2020
https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Wu_MEBOW_Monocular_Estimation_CVPR_2020_supplemental.pdf
null
null
Depth Sensing Beyond LiDAR Range
Kai Zhang, Jiaxin Xie, Noah Snavely, Qifeng Chen
Depth sensing is a critical component of autonomous driving technologies, but today's LiDAR- or stereo camera- based solutions have limited range. We seek to increase the maximum range of self-driving vehicles' depth perception modules for the sake of better safety. To that end, we propose a novel three-camera system that utilizes small field of view cameras. Our system, along with our novel algorithm for computing metric depth, does not require full pre-calibration and can output dense depth maps with practically acceptable accuracy for scenes and objects at long distances not well covered by most commercial LiDARs.
https://openaccess.thecvf.com../../content_CVPR_2020/papers/Zhang_Depth_Sensing_Beyond_LiDAR_Range_CVPR_2020_paper.pdf
http://arxiv.org/abs/2004.03048
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Zhang_Depth_Sensing_Beyond_LiDAR_Range_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Zhang_Depth_Sensing_Beyond_LiDAR_Range_CVPR_2020_paper.html
CVPR 2020
null
null
null
BlendedMVS: A Large-Scale Dataset for Generalized Multi-View Stereo Networks
Yao Yao, Zixin Luo, Shiwei Li, Jingyang Zhang, Yufan Ren, Lei Zhou, Tian Fang, Long Quan
While deep learning has recently achieved great success on multi-view stereo (MVS), limited training data makes the trained model hard to generalize to unseen scenarios. Compared with other computer vision tasks, it is rather difficult to collect a large-scale MVS dataset, as it requires expensive active scanners and a labor-intensive process to obtain ground truth 3D structures. In this paper, we introduce BlendedMVS, a novel large-scale dataset, to provide sufficient training ground truth for learning-based MVS. To create the dataset, we apply a 3D reconstruction pipeline to recover high-quality textured meshes from images of well-selected scenes. Then, we render these mesh models to color images and depth maps. To introduce ambient lighting information during training, the rendered color images are further blended with the input images to generate the training input. Our dataset contains over 17k high-resolution images covering a variety of scenes, including cities, architecture, sculptures and small objects. Extensive experiments demonstrate that BlendedMVS endows the trained model with significantly better generalization ability compared with other MVS datasets. The dataset and pretrained models are available at https://github.com/YoYo000/BlendedMVS.
https://openaccess.thecvf.com../../content_CVPR_2020/papers/Yao_BlendedMVS_A_Large-Scale_Dataset_for_Generalized_Multi-View_Stereo_Networks_CVPR_2020_paper.pdf
http://arxiv.org/abs/1911.10127
https://www.youtube.com/watch?v=3mMuSwpjd8A
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Yao_BlendedMVS_A_Large-Scale_Dataset_for_Generalized_Multi-View_Stereo_Networks_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Yao_BlendedMVS_A_Large-Scale_Dataset_for_Generalized_Multi-View_Stereo_Networks_CVPR_2020_paper.html
CVPR 2020
https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Yao_BlendedMVS_A_Large-Scale_CVPR_2020_supplemental.pdf
null
null
Learning to Detect Important People in Unlabelled Images for Semi-Supervised Important People Detection
Fa-Ting Hong, Wei-Hong Li, Wei-Shi Zheng
Important people detection aims to automatically detect the individuals who play the most important roles in a social event image, which requires the designed model to understand a high-level pattern. However, existing methods rely heavily on supervised learning using large quantities of annotated image samples, which are more costly to collect for important people detection than for individual entity recognition (i.e., object recognition). To overcome this problem, we propose learning important people detection on partially annotated images. Our approach iteratively learns to assign pseudo-labels to individuals in un-annotated images and to update the important people detection model based on data with both labels and pseudo-labels. To alleviate the pseudo-labelling imbalance problem, we introduce a ranking strategy for pseudo-label estimation, and also introduce two weighting strategies: one for weighting the confidence that individuals are important people to strengthen the learning on important people, and the other for neglecting noisy unlabelled images (i.e., images without any important people). We have collected two large-scale datasets for evaluation. The extensive experimental results clearly confirm the efficacy of our method attained by leveraging unlabelled images to improve the performance of important people detection.
https://openaccess.thecvf.com../../content_CVPR_2020/papers/Hong_Learning_to_Detect_Important_People_in_Unlabelled_Images_for_Semi-Supervised_CVPR_2020_paper.pdf
http://arxiv.org/abs/2004.07568
https://www.youtube.com/watch?v=9pJnBTRAZfQ
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Hong_Learning_to_Detect_Important_People_in_Unlabelled_Images_for_Semi-Supervised_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Hong_Learning_to_Detect_Important_People_in_Unlabelled_Images_for_Semi-Supervised_CVPR_2020_paper.html
CVPR 2020
https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Hong_Learning_to_Detect_CVPR_2020_supplemental.pdf
null
null
Fixed-Point Back-Propagation Training
Xishan Zhang, Shaoli Liu, Rui Zhang, Chang Liu, Di Huang, Shiyi Zhou, Jiaming Guo, Qi Guo, Zidong Du, Tian Zhi, Yunji Chen
The recently emerged quantization technique (i.e., using low bit-width fixed-point data instead of high bit-width floating-point data) has been applied to the inference of deep neural networks for fast and efficient execution. However, directly applying quantization in training can cause significant accuracy loss, and thus it remains an open challenge. In this paper, we propose a novel training approach, which applies a layer-wise precision-adaptive quantization in deep neural networks. The new training approach leverages our key insight that the degradation of training accuracy is attributed to the dramatic change of data distribution. Therefore, by keeping the data distribution stable through a layer-wise precision-adaptive quantization, we are able to directly train deep neural networks using low bit-width fixed-point data and achieve guaranteed accuracy, without changing hyperparameters. Experimental results on a wide variety of network architectures (e.g., convolution and recurrent networks) and applications (e.g., image classification, object detection, segmentation and machine translation) show that the proposed approach can train these neural networks with negligible accuracy losses (-1.40% to 1.3%, 0.02% on average), and speed up training by 252% on a state-of-the-art Intel CPU.
https://openaccess.thecvf.com../../content_CVPR_2020/papers/Zhang_Fixed-Point_Back-Propagation_Training_CVPR_2020_paper.pdf
null
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Zhang_Fixed-Point_Back-Propagation_Training_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Zhang_Fixed-Point_Back-Propagation_Training_CVPR_2020_paper.html
CVPR 2020
https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Zhang_Fixed-Point_Back-Propagation_Training_CVPR_2020_supplemental.pdf
null
null
Zero-Assignment Constraint for Graph Matching With Outliers
Fudong Wang, Nan Xue, Jin-Gang Yu, Gui-Song Xia
Graph matching (GM), as a longstanding problem in computer vision and pattern recognition, still suffers from numerous cluttered outliers in practical applications. To address this issue, we present the zero-assignment constraint (ZAC) for approaching the graph matching problem in the presence of outliers. The underlying idea is to suppress the matchings of outliers by assigning zero-valued vectors to the potential outliers in the obtained optimal correspondence matrix. We provide an elaborate theoretical analysis of the problem, i.e., GM with ZAC, and show that the GM problems with and without outliers are intrinsically different, which enables us to put forward a sufficient condition for constructing a valid and reasonable objective function. Consequently, we design an efficient outlier-robust algorithm to significantly reduce the incorrect or redundant matchings caused by numerous outliers. Extensive experiments demonstrate that our method achieves state-of-the-art performance in terms of accuracy and efficiency, especially in the presence of numerous outliers.
https://openaccess.thecvf.com../../content_CVPR_2020/papers/Wang_Zero-Assignment_Constraint_for_Graph_Matching_With_Outliers_CVPR_2020_paper.pdf
http://arxiv.org/abs/2003.11928
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Wang_Zero-Assignment_Constraint_for_Graph_Matching_With_Outliers_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Wang_Zero-Assignment_Constraint_for_Graph_Matching_With_Outliers_CVPR_2020_paper.html
CVPR 2020
https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Wang_Zero-Assignment_Constraint_for_CVPR_2020_supplemental.pdf
null
null
AvatarMe: Realistically Renderable 3D Facial Reconstruction "In-the-Wild"
Alexandros Lattas, Stylianos Moschoglou, Baris Gecer, Stylianos Ploumpis, Vasileios Triantafyllou, Abhijeet Ghosh, Stefanos Zafeiriou
Over the last years, with the advent of Generative Adversarial Networks (GANs), many face analysis tasks have accomplished astounding performance, with applications including, but not limited to, face generation and 3D face reconstruction from a single "in-the-wild" image. Nevertheless, to the best of our knowledge, there is no method which can produce high-resolution photorealistic 3D faces from "in-the-wild" images and this can be attributed to the: (a) scarcity of available data for training, and (b) lack of robust methodologies that can successfully be applied on very high-resolution data. In this paper, we introduce AvatarMe, the first method that is able to reconstruct photorealistic 3D faces from a single "in-the-wild" image with an increasing level of detail. To achieve this, we capture a large dataset of facial shape and reflectance and build on a state-of-the-art 3D texture and shape reconstruction method and successively refine its results, while generating the per-pixel diffuse and specular components that are required for realistic rendering. As we demonstrate in a series of qualitative and quantitative experiments, AvatarMe outperforms the existing arts by a significant margin and reconstructs authentic, 4K by 6K-resolution 3D faces from a single low-resolution image that, for the first time, bridges the uncanny valley.
https://openaccess.thecvf.com../../content_CVPR_2020/papers/Lattas_AvatarMe_Realistically_Renderable_3D_Facial_Reconstruction_In-the-Wild_CVPR_2020_paper.pdf
http://arxiv.org/abs/2003.13845
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Lattas_AvatarMe_Realistically_Renderable_3D_Facial_Reconstruction_In-the-Wild_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Lattas_AvatarMe_Realistically_Renderable_3D_Facial_Reconstruction_In-the-Wild_CVPR_2020_paper.html
CVPR 2020
null
null
null
Generalized Product Quantization Network for Semi-Supervised Image Retrieval
Young Kyun Jang, Nam Ik Cho
Image retrieval methods that employ hashing or vector quantization have achieved great success by taking advantage of deep learning. However, these approaches do not meet expectations unless expensive label information is sufficient. To resolve this issue, we propose the first quantization-based semi-supervised image retrieval scheme: the Generalized Product Quantization (GPQ) network. We design a novel metric learning strategy that preserves semantic similarity between labeled data, and employ an entropy regularization term to fully exploit the inherent potential of unlabeled data. Our solution increases the generalization capacity of the quantization network, which allows overcoming previous limitations in the retrieval community. Extensive experimental results demonstrate that GPQ yields state-of-the-art performance on large-scale real image benchmark datasets.
https://openaccess.thecvf.com../../content_CVPR_2020/papers/Jang_Generalized_Product_Quantization_Network_for_Semi-Supervised_Image_Retrieval_CVPR_2020_paper.pdf
http://arxiv.org/abs/2002.11281
https://www.youtube.com/watch?v=SblaP2hkUpA
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Jang_Generalized_Product_Quantization_Network_for_Semi-Supervised_Image_Retrieval_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Jang_Generalized_Product_Quantization_Network_for_Semi-Supervised_Image_Retrieval_CVPR_2020_paper.html
CVPR 2020
https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Jang_Generalized_Product_Quantization_CVPR_2020_supplemental.pdf
null
null
Extremely Dense Point Correspondences Using a Learned Feature Descriptor
Xingtong Liu, Yiping Zheng, Benjamin Killeen, Masaru Ishii, Gregory D. Hager, Russell H. Taylor, Mathias Unberath
High-quality 3D reconstructions from endoscopy video play an important role in many clinical applications, including surgical navigation where they enable direct video-CT registration. While many methods exist for general multi-view 3D reconstruction, these methods often fail to deliver satisfactory performance on endoscopic video. Part of the reason is that local descriptors that establish pair-wise point correspondences, and thus drive reconstruction, struggle when confronted with the texture-scarce surface of anatomy. Learning-based dense descriptors usually have larger receptive fields enabling the encoding of global information, which can be used to disambiguate matches. In this work, we present an effective self-supervised training scheme and novel loss design for dense descriptor learning. In direct comparison to recent local and dense descriptors on an in-house sinus endoscopy dataset, we demonstrate that our proposed dense descriptor can generalize to unseen patients and scopes, thereby largely improving the performance of Structure from Motion (SfM) in terms of model density and completeness. We also evaluate our method on a public dense optical flow dataset and a small-scale SfM public dataset to further demonstrate the effectiveness and generality of our method. The source code is available at https://github.com/lppllppl920/DenseDescriptorLearning-Pytorch.
https://openaccess.thecvf.com../../content_CVPR_2020/papers/Liu_Extremely_Dense_Point_Correspondences_Using_a_Learned_Feature_Descriptor_CVPR_2020_paper.pdf
http://arxiv.org/abs/2003.00619
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Liu_Extremely_Dense_Point_Correspondences_Using_a_Learned_Feature_Descriptor_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Liu_Extremely_Dense_Point_Correspondences_Using_a_Learned_Feature_Descriptor_CVPR_2020_paper.html
CVPR 2020
null
null
null
Gate-Shift Networks for Video Action Recognition
Swathikiran Sudhakaran, Sergio Escalera, Oswald Lanz
Deep 3D CNNs for video action recognition are designed to learn powerful representations in the joint spatio-temporal feature space. In practice, however, because of the large number of parameters and computations involved, they may under-perform when sufficiently large datasets are not available for training them at scale. In this paper we introduce spatial gating in the spatial-temporal decomposition of 3D kernels. We implement this concept with the Gate-Shift Module (GSM). GSM is lightweight and turns a 2D-CNN into a highly efficient spatio-temporal feature extractor. With GSM plugged in, a 2D-CNN learns to adaptively route features through time and combine them, with almost no additional parameters or computational overhead. We perform an extensive evaluation of the proposed module to study its effectiveness in video action recognition, achieving state-of-the-art results on the Something Something-V1 and Diving48 datasets, and obtaining competitive results on EPIC-Kitchens with far less model complexity.
https://openaccess.thecvf.com../../content_CVPR_2020/papers/Sudhakaran_Gate-Shift_Networks_for_Video_Action_Recognition_CVPR_2020_paper.pdf
http://arxiv.org/abs/1912.00381
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Sudhakaran_Gate-Shift_Networks_for_Video_Action_Recognition_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Sudhakaran_Gate-Shift_Networks_for_Video_Action_Recognition_CVPR_2020_paper.html
CVPR 2020
https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Sudhakaran_Gate-Shift_Networks_for_CVPR_2020_supplemental.zip
null
null
CRNet: Cross-Reference Networks for Few-Shot Segmentation
Weide Liu, Chi Zhang, Guosheng Lin, Fayao Liu
Over the past few years, state-of-the-art image segmentation algorithms have been based on deep convolutional neural networks. To endow a deep network with the ability to understand a concept, humans need to collect a large amount of pixel-level annotated data to train the models, which is time-consuming and tedious. Recently, few-shot segmentation has been proposed to solve this problem. Few-shot segmentation aims to learn a segmentation model that can be generalized to novel classes with only a few training images. In this paper, we propose a cross-reference network (CRNet) for few-shot segmentation. Unlike previous works which only predict the mask in the query image, our proposed model concurrently makes predictions for both the support image and the query image. With a cross-reference mechanism, our network can better find the co-occurrent objects in the two images, thus helping the few-shot segmentation task. We also develop a mask refinement module to recurrently refine the prediction of the foreground regions. For k-shot learning, we propose to finetune parts of the network to take advantage of multiple labeled support images. Experiments on the PASCAL VOC 2012 dataset show that our network achieves state-of-the-art performance.
https://openaccess.thecvf.com../../content_CVPR_2020/papers/Liu_CRNet_Cross-Reference_Networks_for_Few-Shot_Segmentation_CVPR_2020_paper.pdf
http://arxiv.org/abs/2003.10658
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Liu_CRNet_Cross-Reference_Networks_for_Few-Shot_Segmentation_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Liu_CRNet_Cross-Reference_Networks_for_Few-Shot_Segmentation_CVPR_2020_paper.html
CVPR 2020
null
null
null
Space-Time-Aware Multi-Resolution Video Enhancement
Muhammad Haris, Greg Shakhnarovich, Norimichi Ukita
We consider the problem of space-time super-resolution (ST-SR): increasing spatial resolution of video frames and simultaneously interpolating frames to increase the frame rate. Modern approaches handle these axes one at a time. In contrast, our proposed model called STARnet super-resolves jointly in space and time. This allows us to leverage mutually informative relationships between time and space: higher resolution can provide more detailed information about motion, and higher frame-rate can provide better pixel alignment. The components of our model that generate latent low- and high-resolution representations during ST-SR can be used to finetune a specialized mechanism for just spatial or just temporal super-resolution. Experimental results demonstrate that STARnet improves the performances of space-time, spatial, and temporal video super-resolution by substantial margins on publicly available datasets.
https://openaccess.thecvf.com../../content_CVPR_2020/papers/Haris_Space-Time-Aware_Multi-Resolution_Video_Enhancement_CVPR_2020_paper.pdf
http://arxiv.org/abs/2003.13170
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Haris_Space-Time-Aware_Multi-Resolution_Video_Enhancement_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Haris_Space-Time-Aware_Multi-Resolution_Video_Enhancement_CVPR_2020_paper.html
CVPR 2020
https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Haris_Space-Time-Aware_Multi-Resolution_Video_CVPR_2020_supplemental.pdf
null
null
METAL: Minimum Effort Temporal Activity Localization in Untrimmed Videos
Da Zhang, Xiyang Dai, Yuan-Fang Wang
Existing Temporal Activity Localization (TAL) methods largely adopt strong supervision for model training, which requires (1) vast amounts of untrimmed videos for each activity category and (2) accurate segment-level boundary annotations (start time and end time) for every instance. This poses a critical restriction on current methods in practical scenarios where not only are segment-level annotations expensive to obtain, but many activity categories are also rare and unobserved during training. Therefore, can we learn a TAL model under weak supervision that can localize unseen activity classes? To address this scenario, we define a novel example-based TAL problem called Minimum Effort Temporal Activity Localization (METAL): given only a few examples, the goal is to find the occurrences of semantically related segments in an untrimmed video sequence while model training is only supervised by the video-level annotation. Towards this objective, we propose a novel Similarity Pyramid Network (SPN) that adopts the few-shot learning technique of Relation Network and directly encodes hierarchical multi-scale correlations, which we learn by optimizing two complementary loss functions in an end-to-end manner. We evaluate the SPN on the THUMOS'14 and ActivityNet datasets, whose videos we rearrange to fit the METAL setup. Results show that our SPN achieves performance superior or competitive to state-of-the-art approaches with stronger supervision.
https://openaccess.thecvf.com../../content_CVPR_2020/papers/Zhang_METAL_Minimum_Effort_Temporal_Activity_Localization_in_Untrimmed_Videos_CVPR_2020_paper.pdf
null
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Zhang_METAL_Minimum_Effort_Temporal_Activity_Localization_in_Untrimmed_Videos_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Zhang_METAL_Minimum_Effort_Temporal_Activity_Localization_in_Untrimmed_Videos_CVPR_2020_paper.html
CVPR 2020
null
null
null
Image Demoireing with Learnable Bandpass Filters
Bolun Zheng, Shanxin Yuan, Gregory Slabaugh, Ales Leonardis
Image demoireing is a multi-faceted image restoration task involving both texture and color restoration. In this paper, we propose a novel multiscale bandpass convolutional neural network (MBCNN) to address this problem. As an end-to-end solution, MBCNN respectively solves the two sub-problems. For texture restoration, we propose a learnable bandpass filter (LBF) to learn the frequency prior for moire texture removal. For color restoration, we propose a two-step tone mapping strategy, which first applies a global tone mapping to correct for a global color shift, then performs local fine tuning of the color per pixel. Through an ablation study, we demonstrate the effectiveness of the different components of MBCNN. Experimental results on two public datasets show that our method outperforms state-of-the-art methods by a large margin (more than 2dB in terms of PSNR).
https://openaccess.thecvf.com../../content_CVPR_2020/papers/Zheng_Image_Demoireing_with_Learnable_Bandpass_Filters_CVPR_2020_paper.pdf
http://arxiv.org/abs/2004.00406
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Zheng_Image_Demoireing_with_Learnable_Bandpass_Filters_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Zheng_Image_Demoireing_with_Learnable_Bandpass_Filters_CVPR_2020_paper.html
CVPR 2020
null
null
null
Active Vision for Early Recognition of Human Actions
Boyu Wang, Lihan Huang, Minh Hoai
We propose a method for early recognition of human actions, one that can take advantage of multiple cameras while satisfying the constraints due to limited communication bandwidth and processing power. Our method considers multiple cameras, and at each time step, it decides the best camera to use so that a confident recognition decision can be reached as soon as possible. We formulate the camera selection problem as a sequential decision process, and learn a view selection policy based on reinforcement learning. We also develop a novel recurrent neural network architecture to account for the unobserved video frames and the irregular intervals between the observed frames. Experiments on three datasets demonstrate the effectiveness of our approach for early recognition of human actions.
https://openaccess.thecvf.com../../content_CVPR_2020/papers/Wang_Active_Vision_for_Early_Recognition_of_Human_Actions_CVPR_2020_paper.pdf
null
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Wang_Active_Vision_for_Early_Recognition_of_Human_Actions_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Wang_Active_Vision_for_Early_Recognition_of_Human_Actions_CVPR_2020_paper.html
CVPR 2020
https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Wang_Active_Vision_for_CVPR_2020_supplemental.pdf
null
null
Weakly-Supervised Action Localization by Generative Attention Modeling
Baifeng Shi, Qi Dai, Yadong Mu, Jingdong Wang
Weakly-supervised temporal action localization is the problem of learning an action localization model with only video-level action labels available. The general framework largely relies on the classification activation, which employs an attention model to identify the action-related frames and then categorizes them into different classes. Such a method results in the action-context confusion issue: context frames near action clips tend to be recognized as action frames themselves, since they are closely related to the specific classes. To solve this problem, in this paper we propose to model the class-agnostic frame-wise probability conditioned on the frame attention using a conditional Variational Auto-Encoder (VAE). With the observation that the context exhibits a notable difference from the action at the representation level, a probabilistic model, i.e., a conditional VAE, is learned to model the likelihood of each frame given the attention. By maximizing the conditional probability with respect to the attention, the action and non-action frames are well separated. Experiments on THUMOS14 and ActivityNet1.2 demonstrate the advantage of our method and its effectiveness in handling the action-context confusion problem. Code is now available on GitHub.
https://openaccess.thecvf.com../../content_CVPR_2020/papers/Shi_Weakly-Supervised_Action_Localization_by_Generative_Attention_Modeling_CVPR_2020_paper.pdf
http://arxiv.org/abs/2003.12424
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Shi_Weakly-Supervised_Action_Localization_by_Generative_Attention_Modeling_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Shi_Weakly-Supervised_Action_Localization_by_Generative_Attention_Modeling_CVPR_2020_paper.html
CVPR 2020
null
null
null
Unsupervised Domain Adaptation With Hierarchical Gradient Synchronization
Lanqing Hu, Meina Kan, Shiguang Shan, Xilin Chen
Domain adaptation attempts to boost the performance on a target domain by borrowing knowledge from a well-established source domain. To handle the distribution gap between the two domains, prominent approaches endeavor to extract domain-invariant features. It is known that after a perfect domain alignment, the domain-invariant representations of the two domains should share the same characteristics from the perspective of both the overall view and any local piece. Inspired by this, we propose a novel method called Hierarchical Gradient Synchronization to model the synchronization relationship among the local distribution pieces and the global distribution, aiming for more precise domain-invariant features. Specifically, hierarchical domain alignments, including class-wise alignment, group-wise alignment and global alignment, are first constructed. Then, these three types of alignment are constrained to be consistent to ensure better structure preservation. As a result, the obtained features are domain-invariant and intrinsically structure-preserving. As evaluated on extensive domain adaptation tasks, our proposed method achieves state-of-the-art classification performance on both vanilla unsupervised domain adaptation and partial domain adaptation.
https://openaccess.thecvf.com../../content_CVPR_2020/papers/Hu_Unsupervised_Domain_Adaptation_With_Hierarchical_Gradient_Synchronization_CVPR_2020_paper.pdf
null
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Hu_Unsupervised_Domain_Adaptation_With_Hierarchical_Gradient_Synchronization_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Hu_Unsupervised_Domain_Adaptation_With_Hierarchical_Gradient_Synchronization_CVPR_2020_paper.html
CVPR 2020
https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Hu_Unsupervised_Domain_Adaptation_CVPR_2020_supplemental.pdf
null
null
Learning Unsupervised Hierarchical Part Decomposition of 3D Objects From a Single RGB Image
Despoina Paschalidou, Luc Van Gool, Andreas Geiger
Humans perceive the 3D world as a set of distinct objects that are characterized by various low-level (geometry, reflectance) and high-level (connectivity, adjacency, symmetry) properties. Recent methods based on convolutional neural networks (CNNs) have demonstrated impressive progress in 3D reconstruction, even when using a single 2D image as input. However, the majority of these methods focus on recovering the local 3D geometry of an object without considering its part-based decomposition or relations between parts. We address this challenging problem by proposing a novel formulation that allows us to jointly recover the geometry of a 3D object as a set of primitives as well as their latent hierarchical structure without part-level supervision. Our model recovers the higher-level structural decomposition of various objects in the form of a binary tree of primitives, where simple parts are represented with fewer primitives and more complex parts are modeled with more components. Our experiments on the ShapeNet and D-FAUST datasets demonstrate that considering the organization of parts indeed facilitates reasoning about 3D geometry.
https://openaccess.thecvf.com../../content_CVPR_2020/papers/Paschalidou_Learning_Unsupervised_Hierarchical_Part_Decomposition_of_3D_Objects_From_a_CVPR_2020_paper.pdf
http://arxiv.org/abs/2004.01176
https://www.youtube.com/watch?v=Wy6PDTS4_JQ
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Paschalidou_Learning_Unsupervised_Hierarchical_Part_Decomposition_of_3D_Objects_From_a_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Paschalidou_Learning_Unsupervised_Hierarchical_Part_Decomposition_of_3D_Objects_From_a_CVPR_2020_paper.html
CVPR 2020
https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Paschalidou_Learning_Unsupervised_Hierarchical_CVPR_2020_supplemental.pdf
null
null
Light Field Spatial Super-Resolution via Deep Combinatorial Geometry Embedding and Structural Consistency Regularization
Jing Jin, Junhui Hou, Jie Chen, Sam Kwong
Light field (LF) images acquired by hand-held devices usually suffer from low spatial resolution, as the limited sampling resources have to be shared with the angular dimension. LF spatial super-resolution (SR) thus becomes an indispensable part of the LF camera processing pipeline. The high-dimensionality characteristic and complex geometrical structure of LF images make the problem more challenging than traditional single-image SR. The performance of existing methods is still limited as they fail to thoroughly explore the coherence among LF views and are insufficient in accurately preserving the parallax structure of the scene. In this paper, we propose a novel learning-based LF spatial SR framework, in which each view of an LF image is first individually super-resolved by exploring the complementary information among views with combinatorial geometry embedding. For accurate preservation of the parallax structure among the reconstructed views, a regularization network trained over a structure-aware loss function is subsequently appended to enforce correct parallax relationships over the intermediate estimation. Our proposed approach is evaluated over datasets with a large number of testing images including both synthetic and real-world scenes. Experimental results demonstrate the advantage of our approach over state-of-the-art methods, i.e., our method not only improves the average PSNR by more than 1.0 dB but also preserves more accurate parallax details, at a lower computation cost.
https://openaccess.thecvf.com../../content_CVPR_2020/papers/Jin_Light_Field_Spatial_Super-Resolution_via_Deep_Combinatorial_Geometry_Embedding_and_CVPR_2020_paper.pdf
http://arxiv.org/abs/2004.02215
https://www.youtube.com/watch?v=Nhf1Z7N8kDg
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Jin_Light_Field_Spatial_Super-Resolution_via_Deep_Combinatorial_Geometry_Embedding_and_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Jin_Light_Field_Spatial_Super-Resolution_via_Deep_Combinatorial_Geometry_Embedding_and_CVPR_2020_paper.html
CVPR 2020
null
null
null
Auto-Encoding Twin-Bottleneck Hashing
Yuming Shen, Jie Qin, Jiaxin Chen, Mengyang Yu, Li Liu, Fan Zhu, Fumin Shen, Ling Shao
Conventional unsupervised hashing methods usually take advantage of similarity graphs, which are either pre-computed in the high-dimensional space or obtained from random anchor points. On the one hand, existing methods uncouple the procedures of hash function learning and graph construction. On the other hand, graphs empirically built upon original data could introduce biased prior knowledge of data relevance, leading to sub-optimal retrieval performance. In this paper, we tackle the above problems by proposing an efficient and adaptive code-driven graph, which is updated by decoding in the context of an auto-encoder. Specifically, we introduce into our framework twin bottlenecks (i.e., latent variables) that exchange crucial information collaboratively. One bottleneck (i.e., binary codes) conveys the high-level intrinsic data structure captured by the code-driven graph to the other (i.e., continuous variables for low-level detail information), which in turn propagates the updated network feedback for the encoder to learn more discriminative binary codes. The auto-encoding learning objective literally rewards the code-driven graph to learn an optimal encoder. Moreover, the proposed model can be simply optimized by gradient descent without violating the binary constraints. Experiments on benchmarked datasets clearly show the superiority of our framework over the state-of-the-art hashing methods. Our source code can be found at https://github.com/ymcidence/TBH.
https://openaccess.thecvf.com../../content_CVPR_2020/papers/Shen_Auto-Encoding_Twin-Bottleneck_Hashing_CVPR_2020_paper.pdf
http://arxiv.org/abs/2002.11930
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Shen_Auto-Encoding_Twin-Bottleneck_Hashing_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Shen_Auto-Encoding_Twin-Bottleneck_Hashing_CVPR_2020_paper.html
CVPR 2020
null
null
null
Learning to Super Resolve Intensity Images From Events
S. Mohammad Mostafavi I., Jonghyun Choi, Kuk-Jin Yoon
An event camera detects per-pixel intensity differences and produces an asynchronous event stream with low latency, high dynamic range, and low power consumption. As a trade-off, the event camera has low spatial resolution. We propose an end-to-end network to reconstruct high-resolution, high dynamic range (HDR) images directly from the event stream. We evaluate our algorithm on both simulated and real-world sequences and verify that it captures fine details of a scene and outperforms the combination of state-of-the-art event-to-image algorithms with state-of-the-art super-resolution schemes in many quantitative measures by large margins. We further extend our method by using the active pixel sensor (APS) frames or reconstructing images iteratively.
https://openaccess.thecvf.com../../content_CVPR_2020/papers/I._Learning_to_Super_Resolve_Intensity_Images_From_Events_CVPR_2020_paper.pdf
http://arxiv.org/abs/1912.01196
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/I._Learning_to_Super_Resolve_Intensity_Images_From_Events_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/I._Learning_to_Super_Resolve_Intensity_Images_From_Events_CVPR_2020_paper.html
CVPR 2020
https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/I._Learning_to_Super_CVPR_2020_supplemental.zip
null
null
FaceScape: A Large-Scale High Quality 3D Face Dataset and Detailed Riggable 3D Face Prediction
Haotian Yang, Hao Zhu, Yanru Wang, Mingkai Huang, Qiu Shen, Ruigang Yang, Xun Cao
In this paper, we present a large-scale detailed 3D face dataset, FaceScape, and propose a novel algorithm that is able to predict elaborate riggable 3D face models from a single image input. The FaceScape dataset provides 18,760 textured 3D faces, captured from 938 subjects, each with 20 specific expressions. The 3D models contain pore-level facial geometry and are processed to be topologically uniform. These fine 3D facial models can be represented as a 3D morphable model for rough shapes and displacement maps for detailed geometry. Taking advantage of the large-scale and high-accuracy dataset, a novel algorithm is further proposed to learn the expression-specific dynamic details using a deep neural network. The learned relationship serves as the foundation of our 3D face prediction system from a single image input. Unlike previous methods, our predicted 3D models are riggable with highly detailed geometry under different expressions. The unprecedented dataset and code will be released to the public for research purposes.
https://openaccess.thecvf.com../../content_CVPR_2020/papers/Yang_FaceScape_A_Large-Scale_High_Quality_3D_Face_Dataset_and_Detailed_CVPR_2020_paper.pdf
http://arxiv.org/abs/2003.13989
https://www.youtube.com/watch?v=uctztOT8ov4
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Yang_FaceScape_A_Large-Scale_High_Quality_3D_Face_Dataset_and_Detailed_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Yang_FaceScape_A_Large-Scale_High_Quality_3D_Face_Dataset_and_Detailed_CVPR_2020_paper.html
CVPR 2020
https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Yang_FaceScape_A_Large-Scale_CVPR_2020_supplemental.zip
https://cove.thecvf.com/datasets/343
null
Cross-View Tracking for Multi-Human 3D Pose Estimation at Over 100 FPS
Long Chen, Haizhou Ai, Rui Chen, Zijie Zhuang, Shuang Liu
Estimating 3D poses of multiple humans in real-time is a classic but still challenging task in computer vision. Its major difficulty lies in the ambiguity in cross-view association of 2D poses and the huge state space when there are multiple people in multiple views. In this paper, we present a novel solution for multi-human 3D pose estimation from multiple calibrated camera views. It takes 2D poses in different camera coordinates as inputs and aims for accurate 3D poses in the global coordinate system. Unlike previous methods that associate 2D poses among all pairs of views from scratch at every frame, we exploit the temporal consistency in videos to match the 2D inputs with 3D poses directly in 3D space. More specifically, we propose to retain the 3D pose for each person and update them iteratively via cross-view multi-human tracking. This novel formulation improves both accuracy and efficiency, as we demonstrate on widely-used public datasets. To further verify the scalability of our method, we propose a new large-scale multi-human dataset with 12 to 28 camera views. Without bells and whistles, our solution achieves 154 FPS on 12 cameras and 34 FPS on 28 cameras, indicating its ability to handle large-scale real-world applications. The proposed dataset will be released at https://github.com/longcw/crossview_3d_pose_tracking.
https://openaccess.thecvf.com../../content_CVPR_2020/papers/Chen_Cross-View_Tracking_for_Multi-Human_3D_Pose_Estimation_at_Over_100_CVPR_2020_paper.pdf
http://arxiv.org/abs/2003.03972
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Chen_Cross-View_Tracking_for_Multi-Human_3D_Pose_Estimation_at_Over_100_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Chen_Cross-View_Tracking_for_Multi-Human_3D_Pose_Estimation_at_Over_100_CVPR_2020_paper.html
CVPR 2020
https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Chen_Cross-View_Tracking_for_CVPR_2020_supplemental.pdf
null
null
ADINet: Attribute Driven Incremental Network for Retinal Image Classification
Qier Meng, Satoh Shin'ichi
Retinal diseases encompass a variety of types, including different diseases and severity levels. Training a model with all types of disease at once is impractical; instead, the model must be trained dynamically when a patient with a new disease appears. Deep learning techniques have stood out in recent years, but they suffer from catastrophic forgetting, i.e., a dramatic decrease in performance when new training classes appear. We found that keeping the feature distribution of an old model helps maintain the performance of incremental learning. In this paper, we design a framework named "Attribute Driven Incremental Network" (ADINet), a new architecture that integrates class label prediction and attribute prediction into an incremental learning framework to boost the classification performance. For image-level classification, we apply knowledge distillation (KD) to retain the knowledge of base classes. For attribute prediction, we calculate the weight of each attribute of an image and use these weights for more precise attribute prediction. We design an attribute distillation (AD) loss to retain the information of base class attributes as new classes appear. This incremental learning can be performed multiple times with a moderate drop in performance. The results of an experiment on our private retinal fundus image dataset demonstrate that our proposed method outperforms existing state-of-the-art methods. To demonstrate the generalization of our proposed method, we also test it on the ImageNet-150K-sub dataset and show good performance.
https://openaccess.thecvf.com../../content_CVPR_2020/papers/Meng_ADINet_Attribute_Driven_Incremental_Network_for_Retinal_Image_Classification_CVPR_2020_paper.pdf
null
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Meng_ADINet_Attribute_Driven_Incremental_Network_for_Retinal_Image_Classification_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Meng_ADINet_Attribute_Driven_Incremental_Network_for_Retinal_Image_Classification_CVPR_2020_paper.html
CVPR 2020
https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Meng_ADINet_Attribute_Driven_CVPR_2020_supplemental.pdf
null
null
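The ADINet record above mentions knowledge distillation (KD) on class outputs and an attribute distillation (AD) loss with per-attribute weights. As a rough, hedged sketch (not code from the paper; the temperature, weighting scheme, and function names are assumptions for illustration), the two terms might look like this in Python:

```python
import numpy as np

def softmax(x, temperature=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = x / temperature
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, teacher_logits, temperature=2.0):
    """Soft-target cross-entropy between the old (teacher) and new (student)
    model outputs over the base classes; the temperature is an assumed value."""
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    return -(p_teacher * np.log(p_student + 1e-12)).sum(axis=-1).mean()

def ad_loss(student_attrs, teacher_attrs, attr_weights):
    """Weighted squared error between old and new attribute predictions;
    attr_weights stands in for the per-attribute image weights the abstract
    mentions (their exact computation is not reproduced here)."""
    return (attr_weights * (student_attrs - teacher_attrs) ** 2).sum(axis=-1).mean()
```

In an incremental step, one would presumably add both terms to the classification loss on the new classes; the relative weighting of the terms is not specified here.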
Foreground-Aware Relation Network for Geospatial Object Segmentation in High Spatial Resolution Remote Sensing Imagery
Zhuo Zheng, Yanfei Zhong, Junjue Wang, Ailong Ma
Geospatial object segmentation, as a particular semantic segmentation task, always faces larger scale variation, larger intra-class variance of the background, and foreground-background imbalance in high spatial resolution (HSR) remote sensing imagery. However, general semantic segmentation methods mainly focus on scale variation in natural scenes, with inadequate consideration of the other two problems that usually arise in large-area earth observation scenes. In this paper, we argue that these problems stem from the lack of foreground modeling and propose a foreground-aware relation network (FarSeg) from the perspectives of relation-based and optimization-based foreground modeling, to alleviate the above two problems. From the perspective of relation, FarSeg enhances the discrimination of foreground features via foreground-correlated contexts associated by learning the foreground-scene relation. Meanwhile, from the perspective of optimization, a foreground-aware optimization is proposed to focus on foreground examples and hard examples of the background during training for a balanced optimization. The experimental results obtained using a large-scale dataset suggest that the proposed method is superior to state-of-the-art general semantic segmentation methods and achieves a better trade-off between speed and accuracy.
https://openaccess.thecvf.com../../content_CVPR_2020/papers/Zheng_Foreground-Aware_Relation_Network_for_Geospatial_Object_Segmentation_in_High_Spatial_CVPR_2020_paper.pdf
null
https://www.youtube.com/watch?v=wwYE1FR1AWQ
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Zheng_Foreground-Aware_Relation_Network_for_Geospatial_Object_Segmentation_in_High_Spatial_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Zheng_Foreground-Aware_Relation_Network_for_Geospatial_Object_Segmentation_in_High_Spatial_CVPR_2020_paper.html
CVPR 2020
https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Zheng_Foreground-Aware_Relation_Network_CVPR_2020_supplemental.pdf
null
null
Single-View View Synthesis With Multiplane Images
Richard Tucker, Noah Snavely
A recent strand of work in view synthesis uses deep learning to generate multiplane images--a camera-centric, layered 3D representation--given two or more input images at known viewpoints. We apply this representation to single-view view synthesis, a problem which is more challenging but has potentially much wider application. Our method learns to predict a multiplane image directly from a single image input, and we introduce scale-invariant view synthesis for supervision, enabling us to train on online video. We show this approach is applicable to several different datasets, that it additionally generates reasonable depth maps, and that it learns to fill in content behind the edges of foreground objects in background layers.
https://openaccess.thecvf.com../../content_CVPR_2020/papers/Tucker_Single-View_View_Synthesis_With_Multiplane_Images_CVPR_2020_paper.pdf
http://arxiv.org/abs/2004.11364
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Tucker_Single-View_View_Synthesis_With_Multiplane_Images_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Tucker_Single-View_View_Synthesis_With_Multiplane_Images_CVPR_2020_paper.html
CVPR 2020
https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Tucker_Single-View_View_Synthesis_CVPR_2020_supplemental.zip
null
null
Select, Supplement and Focus for RGB-D Saliency Detection
Miao Zhang, Weisong Ren, Yongri Piao, Zhengkun Rong, Huchuan Lu
Depth data, which contain a preponderance of discriminative power in location, have been proven beneficial for accurate saliency prediction. However, RGB-D saliency detection methods are also negatively influenced by randomly distributed erroneous or missing regions on the depth map or along the object boundaries. This offers the possibility of achieving more effective inference with well-designed models. In this paper, we propose a new framework for accurate RGB-D saliency detection that takes account of local and global complementarities from the two modalities. This is achieved by designing a complementary interaction model discriminative enough to simultaneously select useful representations from RGB and depth data, and meanwhile to refine the object boundaries. Moreover, we propose a compensation-aware loss to further process the information not considered by the complementary interaction model, leading to improved generalization ability for challenging scenes. Experiments on six public datasets show that our method outperforms 18 state-of-the-art methods.
https://openaccess.thecvf.com../../content_CVPR_2020/papers/Zhang_Select_Supplement_and_Focus_for_RGB-D_Saliency_Detection_CVPR_2020_paper.pdf
null
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Zhang_Select_Supplement_and_Focus_for_RGB-D_Saliency_Detection_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Zhang_Select_Supplement_and_Focus_for_RGB-D_Saliency_Detection_CVPR_2020_paper.html
CVPR 2020
null
null
null
Towards Discriminability and Diversity: Batch Nuclear-Norm Maximization Under Label Insufficient Situations
Shuhao Cui, Shuhui Wang, Junbao Zhuo, Liang Li, Qingming Huang, Qi Tian
The learning of deep networks largely relies on data with human-annotated labels. In some label-insufficient situations, the performance degrades on the decision boundary with high data density. A common solution is to directly minimize the Shannon entropy, but the side effect caused by entropy minimization, i.e., reduction of the prediction diversity, is mostly ignored. To address this issue, we reinvestigate the structure of the classification output matrix of a randomly selected data batch. We find by theoretical analysis that the prediction discriminability and diversity can be separately measured by the Frobenius norm and rank of the batch output matrix. Besides, the nuclear norm is an upper bound of the Frobenius norm and a convex approximation of the matrix rank. Accordingly, to improve both discriminability and diversity, we propose Batch Nuclear-norm Maximization (BNM) on the output matrix. BNM can boost learning under typical label-insufficient scenarios, such as semi-supervised learning, domain adaptation and open domain recognition. On these tasks, extensive experimental results show that BNM outperforms competitors and works well with existing well-known methods. The code is available at https://github.com/cuishuhao/BNM
https://openaccess.thecvf.com../../content_CVPR_2020/papers/Cui_Towards_Discriminability_and_Diversity_Batch_Nuclear-Norm_Maximization_Under_Label_Insufficient_CVPR_2020_paper.pdf
http://arxiv.org/abs/2003.12237
https://www.youtube.com/watch?v=n8Y5O3JFdgY
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Cui_Towards_Discriminability_and_Diversity_Batch_Nuclear-Norm_Maximization_Under_Label_Insufficient_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Cui_Towards_Discriminability_and_Diversity_Batch_Nuclear-Norm_Maximization_Under_Label_Insufficient_CVPR_2020_paper.html
CVPR 2020
https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Cui_Towards_Discriminability_and_CVPR_2020_supplemental.pdf
null
null
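As a minimal illustration of the quantity described in the BNM abstract above (a sketch, not the authors' code; the toy batch and function name are made up), the nuclear norm of a batch's softmax output matrix can be computed as follows, and a trainer would minimize its negative:

```python
import numpy as np

def batch_nuclear_norm(logits):
    """Nuclear norm (sum of singular values) of the softmax output matrix
    for one batch of shape (batch_size, num_classes)."""
    z = logits - logits.max(axis=1, keepdims=True)            # numerically stable softmax
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return np.linalg.norm(probs, ord='nuc')

# toy usage: a random batch of 8 samples over 5 classes
rng = np.random.default_rng(0)
bnm_term = -batch_nuclear_norm(rng.normal(size=(8, 5)))       # quantity to minimize
print(bnm_term)
```

Because the nuclear norm upper-bounds the Frobenius norm and convexly approximates the matrix rank, maximizing it pushes predictions to be both confident and spread across classes, which is the discriminability/diversity argument the abstract makes.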
4D Association Graph for Realtime Multi-Person Motion Capture Using Multiple Video Cameras
Yuxiang Zhang, Liang An, Tao Yu, Xiu Li, Kun Li, Yebin Liu
This paper contributes a novel realtime multi-person motion capture algorithm using multiview video inputs. Due to the heavy occlusions and closely interacting motions in each view, joint optimization over the multiview images and multiple temporal frames is indispensable, which brings up the essential challenge of realtime efficiency. To this end, for the first time, we unify per-view parsing, cross-view matching, and temporal tracking into a single optimization framework, i.e., a 4D association graph in which each dimension (image space, viewpoint and time) is treated equally and simultaneously. To solve the 4D association graph efficiently, we further contribute the idea of 4D limb bundle parsing based on heuristic searching, followed by limb bundle assembling with a proposed bundle Kruskal's algorithm. Our method enables a realtime motion capture system running at 30fps using 5 cameras on a 5-person scene. Benefiting from the unified parsing, matching and tracking constraints, our method is robust to noisy detections caused by severe occlusions and closely interacting motions, and achieves high-quality online pose reconstruction. The proposed method outperforms state-of-the-art methods quantitatively without using high-level appearance information.
https://openaccess.thecvf.com../../content_CVPR_2020/papers/Zhang_4D_Association_Graph_for_Realtime_Multi-Person_Motion_Capture_Using_Multiple_CVPR_2020_paper.pdf
http://arxiv.org/abs/2002.12625
https://www.youtube.com/watch?v=PgWaul71tzE
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Zhang_4D_Association_Graph_for_Realtime_Multi-Person_Motion_Capture_Using_Multiple_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Zhang_4D_Association_Graph_for_Realtime_Multi-Person_Motion_Capture_Using_Multiple_CVPR_2020_paper.html
CVPR 2020
null
null
null
SDC-Depth: Semantic Divide-and-Conquer Network for Monocular Depth Estimation
Lijun Wang, Jianming Zhang, Oliver Wang, Zhe Lin, Huchuan Lu
Monocular depth estimation is an ill-posed problem, and as such critically relies on scene priors and semantics. Due to its complexity, we propose a deep neural network model based on a semantic divide-and-conquer approach. Our model decomposes a scene into semantic segments, such as object instances and background stuff classes, and then predicts a scale and shift invariant depth map for each semantic segment in a canonical space. Semantic segments of the same category share the same depth decoder, so the global depth prediction task is decomposed into a series of category-specific ones, which are simpler to learn and easier to generalize to new scene types. Finally, our model stitches each local depth segment by predicting its scale and shift based on the global context of the image. The model is trained end-to-end using a multi-task loss for panoptic segmentation and depth prediction, and is therefore able to leverage large-scale panoptic segmentation datasets to boost its semantic understanding. We validate the effectiveness of our approach and show state-of-the-art performance on three benchmark datasets.
https://openaccess.thecvf.com../../content_CVPR_2020/papers/Wang_SDC-Depth_Semantic_Divide-and-Conquer_Network_for_Monocular_Depth_Estimation_CVPR_2020_paper.pdf
null
https://www.youtube.com/watch?v=PGE5OqoqP5U
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Wang_SDC-Depth_Semantic_Divide-and-Conquer_Network_for_Monocular_Depth_Estimation_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Wang_SDC-Depth_Semantic_Divide-and-Conquer_Network_for_Monocular_Depth_Estimation_CVPR_2020_paper.html
CVPR 2020
null
null
null
Meshlet Priors for 3D Mesh Reconstruction
Abhishek Badki, Orazio Gallo, Jan Kautz, Pradeep Sen
Estimating a mesh from an unordered set of sparse, noisy 3D points is a challenging problem that requires carefully selected priors. Existing hand-crafted priors, such as smoothness regularizers, impose an undesirable trade-off between attenuating noise and preserving local detail. Recent deep-learning approaches produce impressive results by learning priors directly from the data. However, the priors are learned at the object level, which makes these algorithms class-specific, and even sensitive to the pose of the object. We introduce meshlets, small patches of mesh that we use to learn local shape priors. Meshlets act as a dictionary of local features and thus allow us to use learned priors to reconstruct object meshes in any pose and from unseen classes, even when the noise is large and the samples sparse.
https://openaccess.thecvf.com../../content_CVPR_2020/papers/Badki_Meshlet_Priors_for_3D_Mesh_Reconstruction_CVPR_2020_paper.pdf
http://arxiv.org/abs/2001.01744
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Badki_Meshlet_Priors_for_3D_Mesh_Reconstruction_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Badki_Meshlet_Priors_for_3D_Mesh_Reconstruction_CVPR_2020_paper.html
CVPR 2020
https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Badki_Meshlet_Priors_for_CVPR_2020_supplemental.pdf
null
null
Transferable, Controllable, and Inconspicuous Adversarial Attacks on Person Re-identification With Deep Mis-Ranking
Hongjun Wang, Guangrun Wang, Ya Li, Dongyu Zhang, Liang Lin
The success of DNNs has driven the extensive applications of person re-identification (ReID) into a new era. However, whether ReID inherits the vulnerability of DNNs remains unexplored. Examining the robustness of ReID systems is rather important, because the insecurity of ReID systems may cause severe losses, e.g., criminals may use adversarial perturbations to cheat CCTV systems. In this work, we examine the insecurity of current best-performing ReID models by proposing a learning-to-mis-rank formulation to perturb the ranking of the system output. As cross-dataset transferability is crucial in the ReID domain, we also perform a black-box attack by developing a novel multi-stage network architecture that pyramids the features of different levels to extract general and transferable features for the adversarial perturbations. Our method can control the number of malicious pixels by using differentiable multi-shot sampling. To guarantee the inconspicuousness of the attack, we also propose a new perception loss to achieve better visual quality. Extensive experiments on four of the largest ReID benchmarks (i.e., Market1501, CUHK03, DukeMTMC, and MSMT17) not only show the effectiveness of our method, but also provide directions for future improvement of the robustness of ReID systems. For example, the accuracy of one of the best-performing ReID systems drops sharply from 91.8% to 1.4% after being attacked by our method. Some attack results are shown in Fig. 1. The code is available at: https://github.com/whj363636/Adversarial-attack-on-Person-ReID-With-Deep-Mis-Ranking.
https://openaccess.thecvf.com../../content_CVPR_2020/papers/Wang_Transferable_Controllable_and_Inconspicuous_Adversarial_Attacks_on_Person_Re-identification_With_CVPR_2020_paper.pdf
http://arxiv.org/abs/2004.04199
https://www.youtube.com/watch?v=R1wx1DVMsNI
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Wang_Transferable_Controllable_and_Inconspicuous_Adversarial_Attacks_on_Person_Re-identification_With_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Wang_Transferable_Controllable_and_Inconspicuous_Adversarial_Attacks_on_Person_Re-identification_With_CVPR_2020_paper.html
CVPR 2020
https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Wang_Transferable_Controllable_and_CVPR_2020_supplemental.pdf
null
null
More Grounded Image Captioning by Distilling Image-Text Matching Model
Yuanen Zhou, Meng Wang, Daqing Liu, Zhenzhen Hu, Hanwang Zhang
Visual attention not only improves the performance of image captioners, but also serves as a visual interpretation to qualitatively measure caption rationality and model transparency. Specifically, we expect that a captioner can fix its attentive gaze on the correct objects while generating the corresponding words. This ability is also known as grounded image captioning. However, the grounding accuracy of existing captioners is far from satisfactory. To improve the grounding accuracy while retaining the captioning quality, it is expensive to collect the word-region alignment as strong supervision. To this end, we propose a Part-of-Speech (POS) enhanced image-text matching model (SCAN): POS-SCAN, as effective knowledge distillation for more grounded image captioning. The benefits are two-fold: 1) given a sentence and an image, POS-SCAN can ground the objects more accurately than SCAN; 2) POS-SCAN serves as a word-region alignment regularization for the captioner's visual attention module. By showing benchmark experimental results, we demonstrate that conventional image captioners equipped with POS-SCAN can significantly improve the grounding accuracy without strong supervision. Last but not least, we explore the indispensable Self-Critical Sequence Training (SCST) in the context of grounded image captioning and show that the image-text matching score can serve as a reward for more grounded captioning.
https://openaccess.thecvf.com../../content_CVPR_2020/papers/Zhou_More_Grounded_Image_Captioning_by_Distilling_Image-Text_Matching_Model_CVPR_2020_paper.pdf
http://arxiv.org/abs/2004.00390
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Zhou_More_Grounded_Image_Captioning_by_Distilling_Image-Text_Matching_Model_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Zhou_More_Grounded_Image_Captioning_by_Distilling_Image-Text_Matching_Model_CVPR_2020_paper.html
CVPR 2020
null
null
null
Leveraging Photometric Consistency Over Time for Sparsely Supervised Hand-Object Reconstruction
Yana Hasson, Bugra Tekin, Federica Bogo, Ivan Laptev, Marc Pollefeys, Cordelia Schmid
Modeling hand-object manipulations is essential for understanding how humans interact with their environment. While of practical importance, estimating the pose of hands and objects during interactions is challenging due to the large mutual occlusions that occur during manipulation. Recent efforts have been directed towards fully-supervised methods that require large amounts of labeled training samples. Collecting 3D ground-truth data for hand-object interactions, however, is costly, tedious, and error-prone. To overcome this challenge we present a method to leverage photometric consistency across time when annotations are only available for a sparse subset of frames in a video. Our model is trained end-to-end on color images to jointly reconstruct hands and objects in 3D by inferring their poses. Given our estimated reconstructions, we differentiably render the optical flow between pairs of adjacent images and use it within the network to warp one frame to another. We then apply a self-supervised photometric loss that relies on the visual consistency between nearby images. We achieve state-of-the-art results on 3D hand-object reconstruction benchmarks and demonstrate that our approach allows us to improve the pose estimation accuracy by leveraging information from neighboring frames in low-data regimes.
https://openaccess.thecvf.com../../content_CVPR_2020/papers/Hasson_Leveraging_Photometric_Consistency_Over_Time_for_Sparsely_Supervised_Hand-Object_Reconstruction_CVPR_2020_paper.pdf
http://arxiv.org/abs/2004.13449
https://www.youtube.com/watch?v=PNwjsAN0ztw
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Hasson_Leveraging_Photometric_Consistency_Over_Time_for_Sparsely_Supervised_Hand-Object_Reconstruction_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Hasson_Leveraging_Photometric_Consistency_Over_Time_for_Sparsely_Supervised_Hand-Object_Reconstruction_CVPR_2020_paper.html
CVPR 2020
https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Hasson_Leveraging_Photometric_Consistency_CVPR_2020_supplemental.pdf
null
null
Scene Recomposition by Learning-Based ICP
Hamid Izadinia, Steven M. Seitz
By moving a depth sensor around a room, we compute a 3D CAD model of the environment, capturing the room shape and contents such as chairs, desks, sofas, and tables. Rather than reconstructing geometry, we match, place, and align each object in the scene to thousands of CAD models of objects. In addition to the fully automatic system, the key technical contribution is a novel approach for aligning CAD models to 3D scans, based on deep reinforcement learning. This approach, which we call Learning-based ICP (LICP), outperforms prior ICP methods in the literature by learning the best points to match and conditioning on object viewpoint. LICP learns to align using only synthetic data and does not require ground truth annotation of object pose or keypoint pair matching in real scene scans. Although LICP is trained on synthetic data without 3D annotations of real scenes, it outperforms both learned local deep feature matching and geometry-based alignment methods in real scenes. The proposed method is evaluated on the real scene datasets SceneNN and ScanNet as well as the synthetic scenes of SUNCG. High-quality results are demonstrated on a range of real-world scenes, with robustness to clutter, viewpoint, and occlusion.
https://openaccess.thecvf.com../../content_CVPR_2020/papers/Izadinia_Scene_Recomposition_by_Learning-Based_ICP_CVPR_2020_paper.pdf
http://arxiv.org/abs/1812.05583
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Izadinia_Scene_Recomposition_by_Learning-Based_ICP_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Izadinia_Scene_Recomposition_by_Learning-Based_ICP_CVPR_2020_paper.html
CVPR 2020
null
null
null
JL-DCF: Joint Learning and Densely-Cooperative Fusion Framework for RGB-D Salient Object Detection
Keren Fu, Deng-Ping Fan, Ge-Peng Ji, Qijun Zhao
This paper proposes a novel joint learning and densely-cooperative fusion (JL-DCF) architecture for RGB-D salient object detection. Existing models usually treat RGB and depth as independent information and design separate networks for feature extraction from each. Such schemes can easily be constrained by a limited amount of training data or over-reliance on an elaborately-designed training process. In contrast, our JL-DCF learns from both RGB and depth inputs through a Siamese network. To this end, we propose two effective components: joint learning (JL), and densely-cooperative fusion (DCF). The JL module provides robust saliency feature learning, while the latter is introduced for complementary feature discovery. Comprehensive experiments on four popular metrics show that the designed framework yields a robust RGB-D saliency detector with good generalization. As a result, JL-DCF significantly advances the top-1 D3Net model by an average of 1.9% (S-measure) across six challenging datasets, showing that the proposed framework offers a potential solution for real-world applications and could provide more insight into the cross-modality complementarity task. The code will be available at https://github.com/kerenfu/JLDCF/.
https://openaccess.thecvf.com../../content_CVPR_2020/papers/Fu_JL-DCF_Joint_Learning_and_Densely-Cooperative_Fusion_Framework_for_RGB-D_Salient_CVPR_2020_paper.pdf
null
https://www.youtube.com/watch?v=3V6mUtcSvmE
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Fu_JL-DCF_Joint_Learning_and_Densely-Cooperative_Fusion_Framework_for_RGB-D_Salient_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Fu_JL-DCF_Joint_Learning_and_Densely-Cooperative_Fusion_Framework_for_RGB-D_Salient_CVPR_2020_paper.html
CVPR 2020
null
null
null
Inflated Episodic Memory With Region Self-Attention for Long-Tailed Visual Recognition
Linchao Zhu, Yi Yang
There has been increasing interest in modeling long-tailed data. Unlike artificially collected datasets, long-tailed data naturally exist in the real world and are thus more realistic. To deal with the class imbalance problem, we introduce an Inflated Episodic Memory (IEM) for long-tailed visual recognition. First, our IEM augments the convolutional neural networks with categorical representative features for rapid learning on tail classes. In traditional few-shot learning, a single prototype is usually leveraged to represent a category. However, long-tailed data have higher intra-class variances, so it can be challenging to learn a single prototype for one category. Thus, we introduce IEM to store the most discriminative feature for each category individually. Besides, the memory banks are updated independently, which further decreases the chance of learning skewed classifiers. Second, we introduce a novel region self-attention mechanism for multi-scale spatial feature map encoding. It is beneficial to incorporate more discriminative features to improve generalization on tail classes. We propose to encode local feature maps at multiple scales, while aggregating the spatial contextual information at the same time. Equipped with IEM and region self-attention, we achieve state-of-the-art performance on four standard long-tailed image recognition benchmarks. Besides, we validate the effectiveness of IEM on a long-tailed video recognition benchmark, i.e., YouTube-8M.
https://openaccess.thecvf.com../../content_CVPR_2020/papers/Zhu_Inflated_Episodic_Memory_With_Region_Self-Attention_for_Long-Tailed_Visual_Recognition_CVPR_2020_paper.pdf
null
https://www.youtube.com/watch?v=UHVXOjaos3s
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Zhu_Inflated_Episodic_Memory_With_Region_Self-Attention_for_Long-Tailed_Visual_Recognition_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Zhu_Inflated_Episodic_Memory_With_Region_Self-Attention_for_Long-Tailed_Visual_Recognition_CVPR_2020_paper.html
CVPR 2020
null
null
null
Learning to Select Base Classes for Few-Shot Classification
Linjun Zhou, Peng Cui, Xu Jia, Shiqiang Yang, Qi Tian
Few-shot learning has attracted intensive research attention in recent years. Many methods have been proposed to generalize a model learned from provided base classes to novel classes, but no previous work studies how to select base classes, or even whether different base classes will result in different generalization performance of the learned model. In this paper, we utilize a simple yet effective measure, the Similarity Ratio, as an indicator for the generalization performance of a few-shot model. We then formulate the base class selection problem as a submodular optimization problem over Similarity Ratio. We further provide theoretical analysis on the optimization lower bound of different optimization methods, which could be used to identify the most appropriate algorithm for different experimental settings. The extensive experiments on ImageNet, Caltech256 and CUB-200-2011 demonstrate that our proposed method is effective in selecting a better base dataset.
https://openaccess.thecvf.com../../content_CVPR_2020/papers/Zhou_Learning_to_Select_Base_Classes_for_Few-Shot_Classification_CVPR_2020_paper.pdf
http://arxiv.org/abs/2004.00315
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Zhou_Learning_to_Select_Base_Classes_for_Few-Shot_Classification_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Zhou_Learning_to_Select_Base_Classes_for_Few-Shot_Classification_CVPR_2020_paper.html
CVPR 2020
https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Zhou_Learning_to_Select_CVPR_2020_supplemental.pdf
null
null
Attention-Based Context Aware Reasoning for Situation Recognition
Thilini Cooray, Ngai-Man Cheung, Wei Lu
Situation Recognition (SR) is a fine-grained action recognition task where the model is expected to not only predict the salient action of the image, but also predict values of all associated semantic roles of the action. Predicting semantic roles is very challenging: a vast variety of possibilities can be the match for a semantic role. Existing work has focused on dependency modelling architectures to solve this issue. Inspired by the success achieved by query-based visual reasoning (e.g., Visual Question Answering), we propose to address semantic role prediction as a query-based visual reasoning problem. However, existing query-based reasoning methods have not considered handling of inter-dependent queries which is a unique requirement of semantic role prediction in SR. Therefore, to the best of our knowledge, we propose the first set of methods to address inter-dependent queries in query-based visual reasoning. Extensive experiments demonstrate the effectiveness of our proposed method which achieves outstanding performance on Situation Recognition task. Furthermore, leveraging query inter-dependency, our methods improve upon a state-of-the-art method that answers queries separately. Our code: https://github.com/thilinicooray/context-aware-reasoning-for-sr
https://openaccess.thecvf.com../../content_CVPR_2020/papers/Cooray_Attention-Based_Context_Aware_Reasoning_for_Situation_Recognition_CVPR_2020_paper.pdf
null
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Cooray_Attention-Based_Context_Aware_Reasoning_for_Situation_Recognition_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Cooray_Attention-Based_Context_Aware_Reasoning_for_Situation_Recognition_CVPR_2020_paper.html
CVPR 2020
https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Cooray_Attention-Based_Context_Aware_CVPR_2020_supplemental.pdf
null
null
Towards Verifying Robustness of Neural Networks Against A Family of Semantic Perturbations
Jeet Mohapatra, Tsui-Wei Weng, Pin-Yu Chen, Sijia Liu, Luca Daniel
Verifying the robustness of neural networks given a specified threat model is a fundamental yet challenging task. While current verification methods mainly focus on the l_p-norm threat model of the input instances, robustness verification against semantic adversarial attacks inducing large l_p-norm perturbations, such as color shifting and lighting adjustment, is beyond their capacity. To bridge this gap, we propose Semantify-NN, a model-agnostic and generic robustness verification approach against semantic perturbations for neural networks. By simply inserting our proposed semantic perturbation layers (SP-layers) into the input layer of any given model, Semantify-NN is model-agnostic, and any l_p-norm based verification tools can be used to verify the model robustness against semantic perturbations. We illustrate the principles of designing the SP-layers and provide examples including semantic perturbations to image classification in the space of hue, saturation, lightness, brightness, contrast and rotation, respectively. In addition, an efficient refinement technique is proposed to further significantly improve the semantic certificate. Experiments on various network architectures and different datasets demonstrate the superior verification performance of Semantify-NN over l_p-norm-based verification frameworks that naively convert semantic perturbations to the l_p-norm. The results show that Semantify-NN can support robustness verification against a wide range of semantic perturbations.
https://openaccess.thecvf.com../../content_CVPR_2020/papers/Mohapatra_Towards_Verifying_Robustness_of_Neural_Networks_Against_A_Family_of_CVPR_2020_paper.pdf
null
https://www.youtube.com/watch?v=Sy274_4IAGk
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Mohapatra_Towards_Verifying_Robustness_of_Neural_Networks_Against_A_Family_of_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Mohapatra_Towards_Verifying_Robustness_of_Neural_Networks_Against_A_Family_of_CVPR_2020_paper.html
CVPR 2020
https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Mohapatra_Towards_Verifying_Robustness_CVPR_2020_supplemental.pdf
null
null
Background Matting: The World Is Your Green Screen
Soumyadip Sengupta, Vivek Jayaram, Brian Curless, Steven M. Seitz, Ira Kemelmacher-Shlizerman
We propose a method for creating a matte - the per-pixel foreground color and alpha - of a person by taking photos or videos in an everyday setting with a handheld camera. Most existing matting methods require a green screen background or a manually created trimap to produce a good matte. Automatic, trimap-free methods are appearing, but are not of comparable quality. In our trimap-free approach, we ask the user to take an additional photo of the background without the subject at the time of capture. This step requires a small amount of foresight but is far less time-consuming than creating a trimap. We train a deep network with an adversarial loss to predict the matte. We first train a matting network with a supervised loss on ground truth data with synthetic composites. To bridge the domain gap to real imagery with no labeling, we train another matting network guided by the first network and by a discriminator that judges the quality of composites. We demonstrate results on a wide variety of photos and videos and show significant improvement over the state of the art.
https://openaccess.thecvf.com../../content_CVPR_2020/papers/Sengupta_Background_Matting_The_World_Is_Your_Green_Screen_CVPR_2020_paper.pdf
http://arxiv.org/abs/2004.00626
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Sengupta_Background_Matting_The_World_Is_Your_Green_Screen_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Sengupta_Background_Matting_The_World_Is_Your_Green_Screen_CVPR_2020_paper.html
CVPR 2020
https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Sengupta_Background_Matting_The_CVPR_2020_supplemental.pdf
null
null
Learning to Simulate Dynamic Environments With GameGAN
Seung Wook Kim, Yuhao Zhou, Jonah Philion, Antonio Torralba, Sanja Fidler
Simulation is a crucial component of any robotic system. In order to simulate correctly, we need to write complex rules of the environment: how dynamic agents behave, and how the actions of each of the agents affect the behavior of others. In this paper, we aim to learn a simulator by simply watching an agent interact with an environment. We focus on graphics games as a proxy of the real environment. We introduce GameGAN, a generative model that learns to visually imitate a desired game by ingesting screenplay and keyboard actions during training. Given a key pressed by the agent, GameGAN "renders" the next screen using a carefully designed generative adversarial network. Our approach offers key advantages over existing work: we design a memory module that builds an internal map of the environment, allowing for the agent to return to previously visited locations with high visual consistency. In addition, GameGAN is able to disentangle static and dynamic components within an image making the behavior of the model more interpretable, and relevant for downstream tasks that require explicit reasoning over dynamic elements. This enables many interesting applications such as swapping different components of the game to build new games that do not exist. We will release the code and trained model, enabling human players to play generated games and their variations with our GameGAN.
https://openaccess.thecvf.com../../content_CVPR_2020/papers/Kim_Learning_to_Simulate_Dynamic_Environments_With_GameGAN_CVPR_2020_paper.pdf
http://arxiv.org/abs/2005.12126
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Kim_Learning_to_Simulate_Dynamic_Environments_With_GameGAN_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Kim_Learning_to_Simulate_Dynamic_Environments_With_GameGAN_CVPR_2020_paper.html
CVPR 2020
https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Kim_Learning_to_Simulate_CVPR_2020_supplemental.zip
null
null
Active 3D Motion Visualization Based on Spatiotemporal Light-Ray Integration
Fumihiko Sakaue, Jun Sato
In this paper, we propose a method of visualizing 3D motion with zero latency. This method achieves motion visualization by projecting special high-frequency light patterns on moving objects without using any feedback mechanisms. For this objective, we focus on the time integration of light rays in the sensing system of observers. It is known that the visual system of human observers integrates light rays in a certain period. Similarly, the image sensor in a camera integrates light rays during the exposure time. Thus, our method embeds multiple images into a time-varying light field, such that the observer of the time-varying light field observes completely different images according to the dynamic motion of the scene. Based on this concept, we propose a method of generating special high-frequency patterns of projector lights. After projection onto target objects with projectors, the image observed on the target changes automatically depending on the motion of the objects and without any scene sensing and data analysis. In other words, we achieve motion visualization without the time delay incurred during sensing and computing.
https://openaccess.thecvf.com../../content_CVPR_2020/papers/Sakaue_Active_3D_Motion_Visualization_Based_on_Spatiotemporal_Light-Ray_Integration_CVPR_2020_paper.pdf
null
https://www.youtube.com/watch?v=In2Iv3vsWC4
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Sakaue_Active_3D_Motion_Visualization_Based_on_Spatiotemporal_Light-Ray_Integration_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Sakaue_Active_3D_Motion_Visualization_Based_on_Spatiotemporal_Light-Ray_Integration_CVPR_2020_paper.html
CVPR 2020
https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Sakaue_Active_3D_Motion_CVPR_2020_supplemental.zip
null
null
Neural Networks Are More Productive Teachers Than Human Raters: Active Mixup for Data-Efficient Knowledge Distillation From a Blackbox Model
Dongdong Wang, Yandong Li, Liqiang Wang, Boqing Gong
We study how to train a student deep neural network for visual recognition by distilling knowledge from a blackbox teacher model in a data-efficient manner. Progress on this problem can significantly reduce the dependence on large-scale datasets for learning high-performing visual recognition models. There are two major challenges. One is that the number of queries into the teacher model should be minimized to save computational and/or financial costs. The other is that the number of images used for the knowledge distillation should be small; otherwise, it violates our expectation of reducing the dependence on large-scale datasets. To tackle these challenges, we propose an approach that blends mixup and active learning. The former effectively augments the few unlabeled images with a big pool of synthetic images sampled from the convex hull of the original images, and the latter actively chooses hard examples for the student neural network from the pool and queries their labels from the teacher model. We validate our approach with extensive experiments.
https://openaccess.thecvf.com../../content_CVPR_2020/papers/Wang_Neural_Networks_Are_More_Productive_Teachers_Than_Human_Raters_Active_CVPR_2020_paper.pdf
http://arxiv.org/abs/2003.13960
https://www.youtube.com/watch?v=yBO8olcWHvE
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Wang_Neural_Networks_Are_More_Productive_Teachers_Than_Human_Raters_Active_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Wang_Neural_Networks_Are_More_Productive_Teachers_Than_Human_Raters_Active_CVPR_2020_paper.html
CVPR 2020
null
null
null
Oops! Predicting Unintentional Action in Video
Dave Epstein, Boyuan Chen, Carl Vondrick
From just a short glance at a video, we can often tell whether a person's action is intentional or not. Can we train a model to recognize this? We introduce a dataset of in-the-wild videos of unintentional action, as well as a suite of tasks for recognizing, localizing, and anticipating its onset. We train a supervised neural network as a baseline and analyze its performance compared to human consistency on the tasks. We also investigate self-supervised representations that leverage natural signals in our dataset, and show the effectiveness of an approach that uses the intrinsic speed of video to perform competitively with highly-supervised pretraining. However, a significant gap between machine and human performance remains.
https://openaccess.thecvf.com../../content_CVPR_2020/papers/Epstein_Oops_Predicting_Unintentional_Action_in_Video_CVPR_2020_paper.pdf
http://arxiv.org/abs/1911.11206
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Epstein_Oops_Predicting_Unintentional_Action_in_Video_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Epstein_Oops_Predicting_Unintentional_Action_in_Video_CVPR_2020_paper.html
CVPR 2020
null
null
null
Semantics-Guided Neural Networks for Efficient Skeleton-Based Human Action Recognition
Pengfei Zhang, Cuiling Lan, Wenjun Zeng, Junliang Xing, Jianru Xue, Nanning Zheng
Skeleton-based human action recognition has attracted great interest thanks to the easy accessibility of human skeleton data. Recently, there has been a trend of using very deep feedforward neural networks to model the 3D coordinates of joints without considering the computational efficiency. In this paper, we propose a simple yet effective semantics-guided neural network (SGN) for skeleton-based action recognition. We explicitly introduce the high-level semantics of joints (joint type and frame index) into the network to enhance the feature representation capability. In addition, we exploit the relationship of joints hierarchically through two modules, i.e., a joint-level module for modeling the correlations of joints in the same frame and a frame-level module for modeling the dependencies of frames by taking the joints in the same frame as a whole. A strong baseline is proposed to facilitate the study of this field. With an order of magnitude smaller model size than most previous works, SGN achieves state-of-the-art performance on the NTU60, NTU120, and SYSU datasets.
https://openaccess.thecvf.com../../content_CVPR_2020/papers/Zhang_Semantics-Guided_Neural_Networks_for_Efficient_Skeleton-Based_Human_Action_Recognition_CVPR_2020_paper.pdf
http://arxiv.org/abs/1904.01189
https://www.youtube.com/watch?v=ytT0pnuyktE
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Zhang_Semantics-Guided_Neural_Networks_for_Efficient_Skeleton-Based_Human_Action_Recognition_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Zhang_Semantics-Guided_Neural_Networks_for_Efficient_Skeleton-Based_Human_Action_Recognition_CVPR_2020_paper.html
CVPR 2020
null
null
null
Adversarial Vertex Mixup: Toward Better Adversarially Robust Generalization
Saehyung Lee, Hyungyu Lee, Sungroh Yoon
Adversarial examples cause neural networks to produce incorrect outputs with high confidence. Although adversarial training is one of the most effective forms of defense against adversarial examples, unfortunately, a large gap exists between test accuracy and training accuracy in adversarial training. In this paper, we identify Adversarial Feature Overfitting (AFO), which may cause poor adversarially robust generalization, and we show that adversarial training can overshoot the optimal point in terms of robust generalization, leading to AFO in our simple Gaussian model. Considering these theoretical results, we present soft labeling as a solution to the AFO problem. Furthermore, we propose Adversarial Vertex mixup (AVmixup), a soft-labeled data augmentation approach for improving adversarially robust generalization. We complement our theoretical analysis with experiments on CIFAR10, CIFAR100, SVHN, and Tiny ImageNet, and show that AVmixup significantly improves the robust generalization performance and that it reduces the trade-off between standard accuracy and adversarial robustness.
https://openaccess.thecvf.com../../content_CVPR_2020/papers/Lee_Adversarial_Vertex_Mixup_Toward_Better_Adversarially_Robust_Generalization_CVPR_2020_paper.pdf
http://arxiv.org/abs/2003.02484
https://www.youtube.com/watch?v=nxryZ6gCA0M
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Lee_Adversarial_Vertex_Mixup_Toward_Better_Adversarially_Robust_Generalization_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Lee_Adversarial_Vertex_Mixup_Toward_Better_Adversarially_Robust_Generalization_CVPR_2020_paper.html
CVPR 2020
https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Lee_Adversarial_Vertex_Mixup_CVPR_2020_supplemental.pdf
null
null
Learning to Autofocus
Charles Herrmann, Richard Strong Bowen, Neal Wadhwa, Rahul Garg, Qiurui He, Jonathan T. Barron, Ramin Zabih
Autofocus is an important task for digital cameras, yet current approaches often exhibit poor performance. We propose a learning-based approach to this problem, and provide a realistic dataset of sufficient size for effective learning. Our dataset is labeled with per-pixel depths obtained from multi-view stereo, following [9]. Using this dataset, we apply modern deep classification models and an ordinal regression loss to obtain an efficient learning-based autofocus technique. We demonstrate that our approach provides a significant improvement compared with previous learned and non-learned methods: our model reduces the mean absolute error by a factor of 3.6 over the best comparable baseline algorithm. Our dataset and code are publicly available.
https://openaccess.thecvf.com../../content_CVPR_2020/papers/Herrmann_Learning_to_Autofocus_CVPR_2020_paper.pdf
http://arxiv.org/abs/2004.12260
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Herrmann_Learning_to_Autofocus_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Herrmann_Learning_to_Autofocus_CVPR_2020_paper.html
CVPR 2020
https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Herrmann_Learning_to_Autofocus_CVPR_2020_supplemental.pdf
null
null
Boosting the Transferability of Adversarial Samples via Attention
Weibin Wu, Yuxin Su, Xixian Chen, Shenglin Zhao, Irwin King, Michael R. Lyu, Yu-Wing Tai
The widespread deployment of deep models necessitates the assessment of model vulnerability in practice, especially for safety- and security-sensitive domains such as autonomous driving and medical diagnosis. Transfer-based attacks against image classifiers thus elicit mounting interest, where attackers are required to craft adversarial images based on local proxy models without the feedback information from remote target ones. However, under such a challenging but practical setup, the synthesized adversarial samples often achieve limited success due to overfitting to the local model employed. In this work, we propose a novel mechanism to alleviate the overfitting issue. It computes model attention over extracted features to regularize the search of adversarial examples, which prioritizes the corruption of critical features that are likely to be adopted by diverse architectures. Consequently, it can promote the transferability of resultant adversarial instances. Extensive experiments on ImageNet classifiers confirm the effectiveness of our strategy and its superiority to state-of-the-art benchmarks in both white-box and black-box settings.
https://openaccess.thecvf.com../../content_CVPR_2020/papers/Wu_Boosting_the_Transferability_of_Adversarial_Samples_via_Attention_CVPR_2020_paper.pdf
null
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Wu_Boosting_the_Transferability_of_Adversarial_Samples_via_Attention_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Wu_Boosting_the_Transferability_of_Adversarial_Samples_via_Attention_CVPR_2020_paper.html
CVPR 2020
null
null
null
Single Image Reflection Removal Through Cascaded Refinement
Chao Li, Yixiao Yang, Kun He, Stephen Lin, John E. Hopcroft
We address the problem of removing undesirable reflections from a single image captured through a glass surface, which is an ill-posed, challenging, but practically important problem for photo enhancement. Inspired by iterative structure reduction for hidden community detection in social networks, we propose an Iterative Boost Convolutional LSTM Network (IBCLN) that enables cascaded prediction for reflection removal. IBCLN is a cascaded network that iteratively refines the estimates of the transmission and reflection layers such that they boost each other's prediction quality, with information transferred across steps of the cascade using an LSTM. The intuition is that the transmission is the strong, dominant structure while the reflection is the weak, hidden structure. They are complementary to each other in a single image, and thus a better estimate and reduction of one side from the original image leads to a more accurate estimate of the other side. To facilitate training over multiple cascade steps, we employ an LSTM to address the vanishing gradient problem, and propose a residual reconstruction loss as further training guidance. In addition, we create a dataset of real-world images with reflection and ground-truth transmission layers to mitigate the problem of insufficient data. Comprehensive experiments demonstrate that the proposed method can effectively remove reflections in real and synthetic images compared with state-of-the-art reflection removal methods.
https://openaccess.thecvf.com../../content_CVPR_2020/papers/Li_Single_Image_Reflection_Removal_Through_Cascaded_Refinement_CVPR_2020_paper.pdf
http://arxiv.org/abs/1911.06634
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Li_Single_Image_Reflection_Removal_Through_Cascaded_Refinement_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Li_Single_Image_Reflection_Removal_Through_Cascaded_Refinement_CVPR_2020_paper.html
CVPR 2020
https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Li_Single_Image_Reflection_CVPR_2020_supplemental.pdf
null
null
Distilled Semantics for Comprehensive Scene Understanding from Videos
Fabio Tosi, Filippo Aleotti, Pierluigi Zama Ramirez, Matteo Poggi, Samuele Salti, Luigi Di Stefano, Stefano Mattoccia
Whole understanding of the surroundings is paramount to autonomous systems. Recent works have shown that deep neural networks can learn geometry (depth) and motion (optical flow) from a monocular video without any explicit supervision from ground truth annotations, which are particularly hard to source for these two tasks. In this paper, we take an additional step toward holistic scene understanding with monocular cameras by learning depth and motion alongside semantics, with supervision for the latter provided by a pre-trained network distilling proxy ground truth images. We address the three tasks jointly by a) a novel training protocol based on knowledge distillation and self-supervision and b) a compact network architecture which enables efficient scene understanding on both power-hungry GPUs and low-power embedded platforms. We thoroughly assess the performance of our framework and show that it yields state-of-the-art results for monocular depth estimation, optical flow and motion segmentation.
https://openaccess.thecvf.com../../content_CVPR_2020/papers/Tosi_Distilled_Semantics_for_Comprehensive_Scene_Understanding_from_Videos_CVPR_2020_paper.pdf
http://arxiv.org/abs/2003.14030
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Tosi_Distilled_Semantics_for_Comprehensive_Scene_Understanding_from_Videos_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Tosi_Distilled_Semantics_for_Comprehensive_Scene_Understanding_from_Videos_CVPR_2020_paper.html
CVPR 2020
https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Tosi_Distilled_Semantics_for_CVPR_2020_supplemental.pdf
null
null
Multi-Dimensional Pruning: A Unified Framework for Model Compression
Jinyang Guo, Wanli Ouyang, Dong Xu
In this work, we propose a unified model compression framework called Multi-Dimensional Pruning (MDP) to simultaneously compress the convolutional neural networks (CNNs) on multiple dimensions. In contrast to the existing model compression methods that only aim to reduce the redundancy along either the spatial/spatial-temporal dimension (e.g., spatial dimension for 2D CNNs, spatial and temporal dimensions for 3D CNNs) or the channel dimension, our newly proposed approach can simultaneously reduce the spatial/spatial-temporal and the channel redundancies for CNNs. Specifically, in order to reduce the redundancy along the spatial/spatial-temporal dimension, we downsample the input tensor of a convolutional layer, in which the scaling factor for the downsampling operation is adaptively selected by our approach. After the convolution operation, the output tensor is upsampled to the original size to ensure the unchanged input size for the subsequent CNN layers. To reduce the channel-wise redundancy, we introduce a gate for each channel of the output tensor as its importance score, in which the gate value is automatically learned. The channels with small importance scores will be removed after the model compression process. Our comprehensive experiments on four benchmark datasets demonstrate that our MDP framework outperforms the existing methods when pruning both 2D CNNs and 3D CNNs.
https://openaccess.thecvf.com../../content_CVPR_2020/papers/Guo_Multi-Dimensional_Pruning_A_Unified_Framework_for_Model_Compression_CVPR_2020_paper.pdf
null
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Guo_Multi-Dimensional_Pruning_A_Unified_Framework_for_Model_Compression_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Guo_Multi-Dimensional_Pruning_A_Unified_Framework_for_Model_Compression_CVPR_2020_paper.html
CVPR 2020
null
null
null
Varicolored Image De-Hazing
Akshay Dudhane, Kuldeep M. Biradar, Prashant W. Patil, Praful Hambarde, Subrahmanyam Murala
The quality of images captured in bad weather is often affected by chromatic casts and low visibility due to the presence of atmospheric particles. Restoration of the color balance is often ignored in most existing image de-hazing methods. In this paper, we propose a varicolored end-to-end image de-hazing network which restores the color balance in a given varicolored hazy image and recovers the haze-free image. The proposed network comprises 1) a haze color correction (HCC) module and 2) a visibility improvement (VI) module. The proposed HCC module provides the required attention to each color channel and generates a color-balanced hazy image, while the proposed VI module processes the color-balanced hazy image through a novel inception attention block to recover the haze-free image. We also propose a novel approach to generate a large-scale varicolored synthetic hazy image database. An ablation study has been carried out to demonstrate the effect of different factors on the performance of the proposed network for image de-hazing. Three benchmark synthetic datasets have been used for quantitative analysis of the proposed network. Visual results on a set of real-world hazy images captured in different weather conditions demonstrate the effectiveness of the proposed approach for varicolored image de-hazing.
https://openaccess.thecvf.com../../content_CVPR_2020/papers/Dudhane_Varicolored_Image_De-Hazing_CVPR_2020_paper.pdf
null
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Dudhane_Varicolored_Image_De-Hazing_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Dudhane_Varicolored_Image_De-Hazing_CVPR_2020_paper.html
CVPR 2020
null
null
null
Visually Imbalanced Stereo Matching
Yicun Liu, Jimmy Ren, Jiawei Zhang, Jianbo Liu, Mude Lin
Understanding of the human vision system (HVS) has inspired many computer vision algorithms. Stereo matching, which borrows the idea from human stereopsis, has been extensively studied in the existing literature. However, scant attention has been paid to a typical scenario where the binocular inputs are qualitatively different (e.g., a high-res master camera and a low-res slave camera in a dual-lens module). Recent advances in human optometry reveal the capability of the human visual system to maintain coarse stereopsis under such visually imbalanced conditions. Inspired by this biological capability, it is natural to ask: do stereo machines share the same capability? In this paper, we carry out a systematic comparison to investigate the effect of various imbalanced conditions on current popular stereo matching algorithms. We show that, resembling the human visual system, those algorithms can handle limited degrees of monocular downgrading but are also prone to collapse beyond a certain threshold. To avoid such collapse, we propose a solution to recover the stereopsis by a joint guided-view-restoration and stereo-reconstruction framework. We show the superiority of our framework on the KITTI dataset and its extension to real-world applications.
https://openaccess.thecvf.com../../content_CVPR_2020/papers/Liu_Visually_Imbalanced_Stereo_Matching_CVPR_2020_paper.pdf
null
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Liu_Visually_Imbalanced_Stereo_Matching_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Liu_Visually_Imbalanced_Stereo_Matching_CVPR_2020_paper.html
CVPR 2020
null
null
null
Defending Against Model Stealing Attacks With Adaptive Misinformation
Sanjay Kariyappa, Moinuddin K. Qureshi
Deep Neural Networks (DNNs) are susceptible to model stealing attacks, which allow a data-limited adversary with no knowledge of the training dataset to clone the functionality of a target model, just by using black-box query access. Such attacks are typically carried out by querying the target model using inputs that are synthetically generated or sampled from a surrogate dataset to construct a labeled dataset. The adversary can use this labeled dataset to train a clone model, which achieves a classification accuracy comparable to that of the target model. We propose "Adaptive Misinformation" to defend against such model stealing attacks. We identify that all existing model stealing attacks invariably query the target model with Out-Of-Distribution (OOD) inputs. By selectively sending incorrect predictions for OOD queries, our defense substantially degrades the accuracy of the attacker's clone model (by up to 40%), while minimally impacting the accuracy (<0.5%) for benign users. Compared to existing defenses, our defense has a significantly better security vs. accuracy trade-off and incurs minimal computational overhead.
https://openaccess.thecvf.com../../content_CVPR_2020/papers/Kariyappa_Defending_Against_Model_Stealing_Attacks_With_Adaptive_Misinformation_CVPR_2020_paper.pdf
http://arxiv.org/abs/1911.07100
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Kariyappa_Defending_Against_Model_Stealing_Attacks_With_Adaptive_Misinformation_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Kariyappa_Defending_Against_Model_Stealing_Attacks_With_Adaptive_Misinformation_CVPR_2020_paper.html
CVPR 2020
https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Kariyappa_Defending_Against_Model_CVPR_2020_supplemental.pdf
null
null
Semi-Supervised Learning for Few-Shot Image-to-Image Translation
Yaxing Wang, Salman Khan, Abel Gonzalez-Garcia, Joost van de Weijer, Fahad Shahbaz Khan
In the last few years, unpaired image-to-image translation has witnessed remarkable progress. Although the latest methods are able to generate realistic images, they crucially rely on a large number of labeled images. Recently, some methods have tackled the challenging setting of few-shot image-to-image translation, reducing the labeled data requirements for the target domain during inference. In this work, we go one step further and also reduce the amount of labeled data required from the source domain during training. To do so, we propose applying semi-supervised learning via a noise-tolerant pseudo-labeling procedure. We also apply a cycle consistency constraint to further exploit the information from unlabeled images, either from the same dataset or external. Additionally, we propose several structural modifications to facilitate the image translation task under these circumstances. Our semi-supervised method for few-shot image translation, called SEMIT, achieves excellent results on four different datasets using as little as 10% of the source labels, and matches the performance of the main fully-supervised competitor using only 20% labeled data. Our code and models are made public at: https://github.com/yaxingwang/SEMIT.
https://openaccess.thecvf.com../../content_CVPR_2020/papers/Wang_Semi-Supervised_Learning_for_Few-Shot_Image-to-Image_Translation_CVPR_2020_paper.pdf
null
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content_CVPR_2020/html/Wang_Semi-Supervised_Learning_for_Few-Shot_Image-to-Image_Translation_CVPR_2020_paper.html
https://openaccess.thecvf.com/content_CVPR_2020/html/Wang_Semi-Supervised_Learning_for_Few-Shot_Image-to-Image_Translation_CVPR_2020_paper.html
CVPR 2020
https://openaccess.thecvf.com../../content_CVPR_2020/supplemental/Wang_Semi-Supervised_Learning_for_CVPR_2020_supplemental.pdf
null
null