arXiv:1811.11921 (MID: 2964667014)
3D shape reconstruction from a single image is a highly ill-posed problem. Modern deep-learning-based systems try to solve it by learning an end-to-end mapping from image to shape via a deep network. In this paper, we instead solve it via an online optimization framework inspired by traditional methods. Our framework employs a deep autoencoder to learn a set of latent codes of 3D object shapes, to which a probabilistic shape prior is fitted using a Gaussian Mixture Model (GMM). At inference, shape and pose are jointly optimized, guided by both image cues and the deep shape prior, without relying on initialization from any trained deep network. Surprisingly, our method achieves performance comparable to state-of-the-art methods even without training an end-to-end network, which marks a promising step in this direction.
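To make the prior concrete, below is a minimal Python sketch of fitting a GMM over autoencoder latent codes and scoring a candidate code. The 128-D code size, mixture count, and random stand-in data are illustrative assumptions, not the settings used in the paper.

import numpy as np
from sklearn.mixture import GaussianMixture

# Stand-in for latent codes of training shapes, one row per shape,
# as produced by the autoencoder's encoder (hypothetical values).
codes = np.random.randn(5000, 128)

# Fit a Gaussian Mixture Model as a probabilistic prior over the codes.
gmm = GaussianMixture(n_components=10, covariance_type="full", random_state=0)
gmm.fit(codes)

# At inference, the prior scores candidate codes: a higher log-likelihood
# marks a more plausible shape for the category.
candidate = np.random.randn(1, 128)
prior_score = gmm.score_samples(candidate)  # log p(z)

In the full framework, this log-likelihood would enter the inference-time objective alongside the image-based terms, as sketched in the last paragraph of this section.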
3D understanding of object shapes from images is considered an important step in scene perception. While mapping an environment using Structure from Motion @cite_25 @cite_22 and SLAM @cite_3 @cite_11 facilitates localization and navigation, a higher-level understanding of objects, in terms of their shape and their position relative to the rest of the background, is essential for manipulation. Early works that localize objects while estimating their pose from an image are limited to the case where a pre-scanned object model is available @cite_38 @cite_58 @cite_21.
{ "cite_N": [ "@cite_38", "@cite_22", "@cite_21", "@cite_3", "@cite_58", "@cite_25", "@cite_11" ], "mid": [ "2111158367", "2536680313", "1605166245", "2152671441", "1489107631", "2138835141", "2108134361" ], "abstract": [ "In this article we present the integration of 3-D shape knowledge into a variational model for level set based image segmentation and contour based 3-D pose tracking. Given the surface model of an object that is visible in the image of one or multiple cameras calibrated to the same world coordinate system, the object contour extracted by the segmentation method is applied to estimate the 3-D pose parameters of the object. Vice-versa, the surface model projected to the image plane helps in a top-down manner to improve the extraction of the contour. While common alternative segmentation approaches, which integrate 2-D shape knowledge, face the problem that an object can look very differently from various viewpoints, a 3-D free form model ensures that for each view the model can fit the data in the image very well. Moreover, one additionally solves the problem of determining the object's pose in 3-D space. The performance is demonstrated by numerous experiments with a monocular and a stereo camera system.", "We present a system that can match and reconstruct 3D scenes from extremely large collections of photographs such as those found by searching for a given city (e.g., Rome) on Internet photo sharing sites. Our system uses a collection of novel parallel distributed matching and reconstruction algorithms, designed to maximize parallelism at each stage in the pipeline and minimize serialization bottlenecks. It is designed to scale gracefully with both the size of the problem and the amount of available computation. We have experimented with a variety of alternative algorithms at each stage of the pipeline and report on which ones work best in a parallel computing environment. Our experimental results demonstrate that it is now possible to reconstruct cities consisting of 150K images in less than a day on a cluster with 500 compute cores.", "This article introduces a technique for region-based pose tracking of multiple objects. Our algorithm uses surface models of the objects to be tracked and at least one calibrated camera view, but does not require color, texture, or other additional properties of the objects. By optimizing a joint energy defined on the pose parameters of all objects, the proposed algorithm can explicitly handle occlusions between different objects. Tracking results in simulated as well as real world scenes demonstrate the effects of occlusion and how they are handled by the proposed method.", "We present a real-time algorithm which can recover the 3D trajectory of a monocular camera, moving rapidly through a previously unknown scene. Our system, which we dub MonoSLAM, is the first successful application of the SLAM methodology from mobile robotics to the \"pure vision\" domain of a single uncontrolled camera, achieving real time but drift-free performance inaccessible to structure from motion approaches. The core of the approach is the online creation of a sparse but persistent map of natural landmarks within a probabilistic framework. Our key novel contributions include an active approach to mapping and measurement, the use of a general motion model for smooth camera movement, and solutions for monocular feature initialization and feature orientation estimation. 
Together, these add up to an extremely efficient and robust algorithm which runs at 30 Hz with standard PC and camera hardware. This work extends the range of robotic systems in which SLAM can be usefully applied, but also opens up new areas. We present applications of MonoSLAM to real-time 3D localization and mapping for a high-performance full-size humanoid robot and live augmented reality with a hand-held camera", "We present an approach for estimating the 3D position and in case of articulated objects also the joint configuration from segmented 2D images. The pose estimation without initial information is a challenging optimization problem in a high dimensional space and is essential for texture acquisition and initialization of model-based tracking algorithms. Our method is able to recognize the correct object in the case of multiple objects and estimates its pose with a high accuracy. The key component is a particle-based global optimization method that converges to the global minimum similar to simulated annealing. After detecting potential bounded subsets of the search space, the particles are divided into clusters and migrate to the most attractive cluster as the time increases. The performance of our approach is verified by means of real scenes and a quantative error analysis for image distortions. Our experiments include rigid bodies and full human bodies.", "Inferring scene geometry and camera motion from a stream of images is possible in principle, but is an ill-conditioned problem when the objects are distant with respect to their size. We have developed a factorization method that can overcome this difficulty by recovering shape and motion under orthography without computing depth as an intermediate step. An image stream can be represented by the 2FxP measurement matrix of the image coordinates of P points tracked through F frames. We show that under orthographic projection this matrix is of rank 3. Based on this observation, the factorization method uses the singular-value decomposition technique to factor the measurement matrix into two matrices which represent object shape and camera rotation respectively. Two of the three translation components are computed in a preprocessing stage. The method can also handle and obtain a full solution from a partially filled-in measurement matrix that may result from occlusions or tracking failures. The method gives accurate results, and does not introduce smoothing in either shape or motion. We demonstrate this with a series of experiments on laboratory and outdoor image streams, with and without occlusions.", "DTAM is a system for real-time camera tracking and reconstruction which relies not on feature extraction but dense, every pixel methods. As a single hand-held RGB camera flies over a static scene, we estimate detailed textured depth maps at selected keyframes to produce a surface patchwork with millions of vertices. We use the hundreds of images available in a video stream to improve the quality of a simple photometric data term, and minimise a global spatially regularised energy functional in a novel non-convex optimisation framework. Interleaved, we track the camera's 6DOF motion precisely by frame-rate whole image alignment against the entire dense model. Our algorithms are highly parallelisable throughout and DTAM achieves real-time performance using current commodity GPU hardware. 
We demonstrate that a dense model permits superior tracking performance under rapid motion compared to a state of the art method using features; and also show the additional usefulness of the dense model for real-time scene interaction in a physics-enhanced augmented reality application." ] }
It was soon realized that the valid 3D shapes of objects belonging to a specific category are highly correlated, and dimensionality reduction emerged as a prominent tool for modeling object shapes. Works like @cite_27 reconstruct objects in traditional image segmentation datasets such as PASCAL VOC @cite_4 by extending the ideas of non-rigid Structure from Motion @cite_37 @cite_17 @cite_56 @cite_5. Methods like @cite_42 use these learned category-specific shape manifolds to reconstruct the shape of an object from a single image by fitting the reconstructed shape to the image silhouette and refining it with simple image-based cues such as shape from shading @cite_9. A linear shape manifold of this kind is sketched below.
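As a concrete illustration of such a linear, category-specific shape manifold, the sketch below fits a PCA basis to aligned training shapes; the shape count, vertex count, and random stand-in data are hypothetical.

import numpy as np
from sklearn.decomposition import PCA

# Aligned training shapes for one category: N shapes, each with V
# vertices flattened into a 3V vector (random stand-in data here).
N, V = 200, 500
shapes = np.random.randn(N, 3 * V)

# A low-dimensional basis captures the correlated intra-class variation.
pca = PCA(n_components=10)
coeffs = pca.fit_transform(shapes)

# Every shape on the manifold is the mean plus a linear combination of
# basis modes, so single-image fitting reduces to estimating a handful
# of coefficients (e.g., against a silhouette).
recon = pca.mean_ + coeffs[0] @ pca.components_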
{ "cite_N": [ "@cite_37", "@cite_4", "@cite_9", "@cite_42", "@cite_56", "@cite_27", "@cite_5", "@cite_17" ], "mid": [ "2124600577", "2031489346", "1515194178", "1893912098", "1994804971", "1977792424", "2163394682", "2130398583" ], "abstract": [ "The paper addresses the problem of recovering 3D non-rigid shape models from image sequences. For example, given a video recording of a talking person, we would like to estimate a 3D model of the lips and the full face and its internal modes of variation. Many solutions that recover 3D shape from 2D image sequences have been proposed; these so-called structure-from-motion techniques usually assume that the 3D object is rigid. For example, C. Tomasi and T. Kanades' (1992) factorization technique is based on a rigid shape matrix, which produces a tracking matrix of rank 3 under orthographic projection. We propose a novel technique based on a non-rigid model, where the 3D shape in each frame is a linear combination of a set of basis shapes. Under this model, the tracking matrix is of higher rank, and can be factored in a three-step process to yield pose, configuration and shape. To the best of our knowledge, this is the first model free approach that can recover from single-view video sequences nonrigid shape models. We demonstrate this new algorithm on several video sequences. We were able to recover 3D non-rigid human face and animal models with high accuracy.", "The Pascal Visual Object Classes (VOC) challenge is a benchmark in visual object category recognition and detection, providing the vision and machine learning communities with a standard dataset of images and annotation, and standard evaluation procedures. Organised annually from 2005 to present, the challenge and its associated dataset has become accepted as the benchmark for object detection. This paper describes the dataset and evaluation procedure. We review the state-of-the-art in evaluated methods for both classification and detection, analyse whether the methods are statistically different, what they are learning from the images (e.g. the object or its context), and what the methods find easy or confuse. The paper concludes with lessons learnt in the three year history of the challenge, and proposes directions for future improvement and extension.", "", "Object reconstruction from a single image - in the wild - is a problem where we can make progress and get meaningful results today. This is the main message of this paper, which introduces an automated pipeline with pixels as inputs and 3D surfaces of various rigid categories as outputs in images of realistic scenes. At the core of our approach are deformable 3D models that can be learned from 2D annotations available in existing object detection datasets, that can be driven by noisy automatic object segmentations and which we complement with a bottom-up module for recovering high-frequency shape details. We perform a comprehensive quantitative analysis and ablation study of our approach using the recently introduced PASCAL 3D+ dataset and show very encouraging automatic reconstructions on PASCAL VOC.", "This paper proposes a simple \"prior-free\" method for solving the non-rigid structure-from-motion (NRSfM) factorization problem. Other than using the fundamental low-order linear combination model assumption, our method does not assume any extra prior knowledge either about the non-rigid structure or about the camera motions. 
Yet, it works effectively and reliably, producing optimal results, and not suffering from the inherent basis ambiguity issue which plagued most conventional NRSfM factorization methods. Our method is very simple to implement, which involves solving a very small SDP (semi-definite programming) of fixed size, and a nuclear-norm minimization problem. We also present theoretical analysis on the uniqueness and the relaxation gap of our solutions. Extensive experiments on both synthetic and real motion capture data (assuming following the low-order linear combination model) are conducted, which demonstrate that our method indeed outperforms most of the existing non-rigid factorization methods. This work offers not only new theoretical insight, but also a practical, everyday solution to NRSfM.", "We address the problem of populating object category detection datasets with dense, per-object 3D reconstructions, bootstrapped from class labels, ground truth figure-ground segmentations and a small set of keypoint annotations. Our proposed algorithm first estimates camera viewpoint using rigid structure-from-motion, then reconstructs object shapes by optimizing over visual hull proposals guided by loose within-class shape similarity assumptions. The visual hull sampling process attempts to intersect an object's projection cone with the cones of minimal subsets of other similar objects among those pictured from certain vantage points. We show that our method is able to produce convincing per-object 3D reconstructions on one of the most challenging existing object-category detection datasets, PASCAL VOC. Our results may re-stimulate once popular geometry-oriented model-based recognition approaches.", "This paper offers the first variational approach to the problem of dense 3D reconstruction of non-rigid surfaces from a monocular video sequence. We formulate non-rigid structure from motion (nrsfm) as a global variational energy minimization problem to estimate dense low-rank smooth 3D shapes for every frame along with the camera motion matrices, given dense 2D correspondences. Unlike traditional factorization based approaches to nrsfm, which model the low-rank non-rigid shape using a fixed number of basis shapes and corresponding coefficients, we minimize the rank of the matrix of time-varying shapes directly via trace norm minimization. In conjunction with this low-rank constraint, we use an edge preserving total-variation regularization term to obtain spatially smooth shapes for every frame. Thanks to proximal splitting techniques the optimization problem can be decomposed into many point-wise sub-problems and simple linear systems which can be easily solved on GPU hardware. We show results on real sequences of different objects (face, torso, beating heart) where, despite challenges in tracking, illumination changes and occlusions, our method reconstructs highly deforming smooth surfaces densely and accurately directly from video, without the need for any prior models or shape templates.", "Nonrigid 3D structure-from-motion and 2D optical flow can both be formulated as tensor factorization problems. The two problems can be made equivalent through a noisy affine transform, yielding a combined nonrigid structure-from-intensities problem that we solve via structured matrix decompositions. Often the preconditions for this factorization are violated by image noise and deficiencies of the data visa-vis the sample complexity of the problem. 
Both issues are remediated with careful use of rank constraints, norm constraints, and integration over uncertainty in the intensity values, yielding novel solutions for SVD under uncertainty, factorization under uncertainty, nonrigid factorization, and subspace optical flow. The resulting integrated algorithm can track and reconstruct in 3D nonrigid surfaces having very little texture, for example the smooth parts of the face. Working with low-resolution low-texture \"found video,\" these methods produce good tracking and 3D reconstruction results where prior algorithms fail." ] }
While the methods mentioned above learn linear object shape manifolds from 2D keypoint correspondences or object segmentation annotations, the increasing availability of class-specific 3D CAD models and object scans allows researchers to learn more complex manifolds of shapes within an object category directly from 3D data. For example, Gaussian Process Latent Variable Models and kernel Principal Component Analysis are employed to learn a compact latent space of object shapes in @cite_32 @cite_41 @cite_51 @cite_24. Taking advantage of modern deep learning techniques, deep auto-encoders @cite_33 @cite_13, VAEs @cite_43, and GANs @cite_16 @cite_12 can be better alternatives for modeling object shapes than traditional dimensionality reduction tools; a minimal auto-encoder of this kind is sketched below.
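As a minimal PyTorch stand-in for the nonlinear shape models cited above, the auto-encoder below compresses a flattened point cloud into a compact code; the architecture and layer sizes are illustrative and do not correspond to any particular published network.

import torch
import torch.nn as nn

class ShapeAutoencoder(nn.Module):
    def __init__(self, n_points=1024, code_dim=128):
        super().__init__()
        d = 3 * n_points  # point cloud flattened into one vector
        self.encoder = nn.Sequential(
            nn.Linear(d, 512), nn.ReLU(), nn.Linear(512, code_dim))
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, 512), nn.ReLU(), nn.Linear(512, d))

    def forward(self, x):
        z = self.encoder(x)        # compact latent code of the shape
        return self.decoder(z), z  # reconstruction and its code

ae = ShapeAutoencoder()
recon, code = ae(torch.randn(8, 3 * 1024))  # batch of 8 stand-in shapes

Unlike PCA, the decoder is a learned nonlinear map, so the latent space can model curved shape manifolds at the cost of requiring more training data.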
{ "cite_N": [ "@cite_13", "@cite_33", "@cite_41", "@cite_32", "@cite_24", "@cite_43", "@cite_16", "@cite_51", "@cite_12" ], "mid": [ "2025768430", "2100495367", "1794667371", "2071254771", "2510377077", "", "2099471712", "2119493293", "2546066744" ], "abstract": [ "Previous work has shown that the difficulties in learning deep generative or discriminative models can be overcome by an initial unsupervised learning step that maps inputs to useful intermediate representations. We introduce and motivate a new training principle for unsupervised learning of a representation based on the idea of making the learned representations robust to partial corruption of the input pattern. This approach can be used to train autoencoders, and these denoising autoencoders can be stacked to initialize deep architectures. The algorithm can be motivated from a manifold learning and information theoretic perspective or from a generative model perspective. Comparative experiments clearly show the surprising advantage of corrupting the input of autoencoders on a pattern classification benchmark suite.", "High-dimensional data can be converted to low-dimensional codes by training a multilayer neural network with a small central layer to reconstruct high-dimensional input vectors. Gradient descent can be used for fine-tuning the weights in such “autoencoder” networks, but this works well only if the initial weights are close to a good solution. We describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data.", "We propose a novel framework for joint 2D segmentation and 3D pose and 3D shape recovery, for images coming from a single monocular source. In the past, integration of all three has proven difficult, largely because of the high degree of ambiguity in the 2D - 3D mapping. Our solution is to learn nonlinear and probabilistic low dimensional latent spaces, using the Gaussian Process Latent Variable Models dimensionality reduction technique. These act as class or activity constraints to a simultaneous and variational segmentation --- recovery --- reconstruction process. We define an image and level set based energy function, which we minimise with respect to 3D pose and shape, 2D segmentation resulting automatically as the projection of the recovered shape under the recovered pose. We represent 3D shapes as zero levels of 3D level set embedding functions, which we project down directly to probabilistic 2D occupancy maps, without the requirement of an intermediary explicit contour stage. Finally, we detail a fast, open-source, GPU-based implementation of our algorithm, which we use to produce results on both real and artificial video sequences.", "Segmentation involves separating an object from the background in a given image. The use of image information alone often leads to poor segmentation results due to the presence of noise, clutter or occlusion. The introduction of shape priors in the geometric active contour (GAC) framework has proved to be an effective way to ameliorate some of these problems. In this work, we propose a novel segmentation method combining image information with prior shape knowledge, using level-sets. Following the work of , we propose to revisit the use of PCA to introduce prior knowledge about shapes in a more robust manner. 
We utilize kernel PCA (KPCA) and show that this method outperforms linear PCA by allowing only those shapes that are close enough to the training data. In our segmentation framework, shape knowledge and image information are encoded into two energy functionals entirely described in terms of shapes. This consistent description permits to fully take advantage of the Kernel PCA methodology and leads to promising segmentation results. In particular, our shape-driven segmentation technique allows for the simultaneous encoding of multiple types of shapes, and offers a convincing level of robustness with respect to noise, occlusions, or smearing.", "Estimating the pose and 3D shape of a large variety of instances within an object class from stereo images is a challenging problem, especially in realistic conditions such as urban street scenes. We propose a novel approach for using compact shape manifolds of the shape within an object class for object segmentation, pose and shape estimation. Our method first detects objects and estimates their pose coarsely in the stereo images using a state-of-the-art 3D object detection method. An energy minimization method then aligns shape and pose concurrently with the stereo reconstruction of the object. In experiments, we evaluate our approach for detection, pose and shape estimation of cars in real stereo images of urban street scenes. We demonstrate that our shape manifold alignment method yields improved results over the initial stereo reconstruction and object detection method in depth and pose accuracy.", "", "We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.", "We propose a formulation of monocular SLAM which combines live dense reconstruction with shape priors-based 3D tracking and reconstruction. Current live dense SLAM approaches are limited to the reconstruction of visible surfaces. Moreover, most of them are based on the minimisation of a photo-consistency error, which usually makes them sensitive to specularities. In the 3D pose recovery literature, problems caused by imperfect and ambiguous image information have been dealt with by using prior shape knowledge. At the same time, the success of depth sensors has shown that combining joint image and depth information drastically increases the robustness of the classical monocular 3D tracking and 3D reconstruction approaches. In this work we link dense SLAM to 3D object pose and shape recovery. 
More specifically, we automatically augment our SLAM system with object specific identity, together with 6D pose and additional shape degrees of freedom for the object(s) of known class in the scene, combining image data and depth information for the pose and shape recovery. This leads to a system that allows for full scaled 3D reconstruction with the known object(s) segmented from the scene. The segmentation enhances the clarity, accuracy and completeness of the maps built by the dense SLAM system, while the dense 3D data aids the segmentation process, yielding faster and more reliable convergence than when using 2D image data alone.", "We study the problem of 3D object generation. We propose a novel framework, namely 3D Generative Adversarial Network (3D-GAN), which generates 3D objects from a probabilistic space by leveraging recent advances in volumetric convolutional networks and generative adversarial nets. The benefits of our model are three-fold: first, the use of an adversarial criterion, instead of traditional heuristic criteria, enables the generator to capture object structure implicitly and to synthesize high-quality 3D objects; second, the generator establishes a mapping from a low-dimensional probabilistic space to the space of 3D objects, so that we can sample objects without a reference image or CAD models, and explore the 3D object manifold; third, the adversarial discriminator provides a powerful 3D shape descriptor which, learned without supervision, has wide applications in 3D object recognition. Experiments demonstrate that our method generates high-quality 3D objects, and our unsupervisedly learned features achieve impressive performance on 3D object recognition, comparable with those of supervised learning methods." ] }
Recently, with the advance of deep learning, many end-to-end systems have been proposed to reconstruct object shape directly from a single RGB image. Different 3D shape representations are adopted in these deep-learning-based methods. Volumetric representations dominate the early works @cite_48 @cite_34 @cite_57 @cite_18 @cite_26 , since they are a natural extension of dense 2D segmentation: 3D deconvolutions replace the 2D convolutions used in CNNs for image segmentation. Octrees @cite_55 @cite_23 and the Discrete Cosine Transform @cite_36 have been proposed to reduce the memory required to predict high-resolution volumetric grids. Other representations such as meshes @cite_44 @cite_14 , multi-view depth maps @cite_49 @cite_29 @cite_20 , and point clouds @cite_0 @cite_15 have also been explored. It is shown in @cite_0 @cite_15 that the point cloud representation is more efficient than volumetric grids and simpler than meshes and octrees in terms of designing a suitable network architecture and training loss; a common point-cloud loss is sketched below.
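As one example of why point clouds simplify loss design, below is a minimal brute-force Chamfer distance in PyTorch, the kind of loss commonly paired with point-cloud prediction networks; batching and nearest-neighbor acceleration are omitted.

import torch

def chamfer_distance(p, q):
    # p: (N, 3) and q: (M, 3) point sets.
    d = torch.cdist(p, q)  # (N, M) pairwise Euclidean distances
    # Average nearest-neighbor distance in both directions.
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

loss = chamfer_distance(torch.randn(1024, 3), torch.randn(1024, 3))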
{ "cite_N": [ "@cite_18", "@cite_26", "@cite_14", "@cite_36", "@cite_48", "@cite_55", "@cite_29", "@cite_57", "@cite_44", "@cite_0", "@cite_23", "@cite_49", "@cite_15", "@cite_34", "@cite_20" ], "mid": [ "2609026071", "", "2778361827", "2768376748", "2342277278", "2606840594", "2964121028", "2762055679", "2963527086", "2560722161", "2603429625", "2951353845", "2963026643", "2964137676", "2894795260" ], "abstract": [ "We study the notion of consistency between a 3D shape and a 2D observation and propose a differentiable formulation which allows computing gradients of the 3D shape given an observation from an arbitrary view. We do so by reformulating view consistency using a differentiable ray consistency (DRC) term. We show that this formulation can be incorporated in a learning framework to leverage different types of multi-view observations e.g. foreground masks, depth, color images, semantics etc. as supervision for learning single-view 3D prediction. We present empirical analysis of our technique in a controlled setting. We also show that this approach allows us to improve over existing techniques for single-view reconstruction of objects from the PASCAL VOC dataset.", "", "Recent deep networks that directly handle points in a point set, e.g., PointNet, have been state-of-the-art for supervised semantic learning tasks on point clouds such as classification and segmentation. In this work, a novel end-to-end deep auto-encoder is proposed to address unsupervised learning challenges on point clouds. On the encoder side, a graph-based enhancement is enforced to promote local structures on top of PointNet. Then, a novel folding-based approach is proposed in the decoder, which folds a 2D grid onto the underlying 3D object surface of a point cloud. The proposed decoder only uses about 7 parameters of a decoder with fully-connected neural networks, yet leads to a more discriminative representation that achieves higher linear SVM classification accuracy than the benchmark. In addition, the proposed decoder structure is shown, in theory, to be a generic architecture that is able to reconstruct an arbitrary point cloud from a 2D grid. Finally, this folding-based decoder is interpretable since the reconstruction could be viewed as a fine granular warping from the 2D grid to the point cloud surface.", "One of the long-standing tasks in computer vision is to use a single 2-D view of an object in order to produce its 3-D shape. Recovering the lost dimension in this process has been the goal of classic shape-from-X methods, but often the assumptions made in those works are quite limiting to be useful for general 3-D objects. This problem has been recently addressed with deep learning methods containing a 2-D (convolution) encoder followed by a 3-D (deconvolution) decoder. These methods have been reasonably successful, but memory and run time constraints impose a strong limitation in terms of the resolution of the reconstructed 3-D shapes. In particular, state-of-the-art methods are able to reconstruct 3-D shapes represented by volumes of at most 323 voxels using state-of-the-art desktop computers. In this work, we present a scalable 2-D single view to 3-D volume reconstruction deep learning method, where the 3-D (deconvolution) decoder is replaced by a simple inverse discrete cosine transform (IDCT) decoder. 
Our simpler architecture has an order of magnitude faster inference when reconstructing 3-D volumes compared to the convolution-deconvolutional model, an exponentially smaller memory complexity while training and testing, and a sub-linear run-time training complexity with respect to the output volume size. We show on benchmark datasets that our method can produce high-resolution reconstructions with state of the art accuracy.", "Inspired by the recent success of methods that employ shape priors to achieve robust 3D reconstructions, we propose a novel recurrent neural network architecture that we call the 3D Recurrent Reconstruction Neural Network (3D-R2N2). The network learns a mapping from images of objects to their underlying 3D shapes from a large collection of synthetic data [13]. Our network takes in one or more images of an object instance from arbitrary viewpoints and outputs a reconstruction of the object in the form of a 3D occupancy grid. Unlike most of the previous works, our network does not require any image annotations or object class labels for training or testing. Our extensive experimental analysis shows that our reconstruction framework (i) outperforms the state-of-the-art methods for single view reconstruction, and (ii) enables the 3D reconstruction of objects in situations when traditional SFM SLAM methods fail (because of lack of texture and or wide baseline).", "Recently, Convolutional Neural Networks have shown promising results for 3D geometry prediction. They can make predictions from very little input data such as a single color image. A major limitation of such approaches is that they only predict a coarse resolution voxel grid, which does not capture the surface of the objects well. We propose a general framework, called hierarchical surface prediction (HSP), which facilitates prediction of high resolution voxel grids. The main insight is that it is sufficient to predict high resolution voxels around the predicted surfaces. The exterior and interior of the objects can be represented with coarse resolution voxels. Our approach is not dependent on a specific input type. We show results for geometry prediction from color images, depth images and shape completion from partial voxel grids. Our analysis shows that our high resolution predictions are more accurate than low resolution predictions.", "", "Supervised 3D reconstruction has witnessed a significant progress through the use of deep neural networks. However, this increase in performance requires large scale annotations of 2D 3D data. In this paper, we explore inexpensive 2D supervision as an alternative for expensive 3D CAD annotation. Specifically, we use foreground masks as weak supervision through a raytrace pooling layer that enables perspective projection and backpropagation. Additionally, since the 3D reconstruction from masks is an ill posed problem, we propose to constrain the 3D reconstruction to the manifold of unlabeled realistic 3D shapes that match mask observations. We demonstrate that learning a log-barrier solution to this constrained optimization problem resembles the GAN objective, enabling the use of existing tools for training GANs. We evaluate and analyze the manifold constrained reconstruction on various datasets for single and multi-view reconstruction of both synthetic and real images.", "For modeling the 3D world behind 2D images, which 3D representation is most appropriate? A polygon mesh is a promising candidate for its compactness and geometric properties. 
However, it is not straightforward to model a polygon mesh from 2D images using neural networks because the conversion from a mesh to an image, or rendering, involves a discrete operation called rasterization, which prevents back-propagation. Therefore, in this work, we propose an approximate gradient for rasterization that enables the integration of rendering into neural networks. Using this renderer, we perform single-image 3D mesh reconstruction with silhouette image supervision and our system outperforms the existing voxel-based approach. Additionally, we perform gradient-based 3D mesh editing operations, such as 2D-to-3D style transfer and 3D DeepDream, with 2D supervision for the first time. These applications demonstrate the potential of the integration of a mesh renderer into neural networks and the effectiveness of our proposed renderer.", "Generation of 3D data by deep neural network has been attracting increasing attention in the research community. The majority of extant works resort to regular representations such as volumetric grids or collection of images, however, these representations obscure the natural invariance of 3D shapes under geometric transformations, and also suffer from a number of other issues. In this paper we address the problem of 3D reconstruction from a single image, generating a straight-forward form of output – point cloud coordinates. Along with this problem arises a unique and interesting issue, that the groundtruth shape for an input image may be ambiguous. Driven by this unorthordox output form and the inherent ambiguity in groundtruth, we design architecture, loss function and learning paradigm that are novel and effective. Our final solution is a conditional shape sampler, capable of predicting multiple plausible 3D point clouds from an input image. In experiments not only can our system outperform state-of-the-art methods on single image based 3D reconstruction benchmarks, but it also shows strong performance for 3D shape completion and promising ability in making multiple plausible predictions.", "We present a deep convolutional decoder architecture that can generate volumetric 3D outputs in a compute- and memory-efficient manner by using an octree representation. The network learns to predict both the structure of the octree, and the occupancy values of individual cells. This makes it a particularly valuable technique for generating 3D shapes. In contrast to standard decoders acting on regular voxel grids, the architecture does not have cubic complexity. This allows representing much higher resolution outputs with a limited memory budget. We demonstrate this in several application domains, including 3D convolutional autoencoders, generation of objects and whole scenes from high-level representations, and shape from a single image.", "We present a convolutional network capable of inferring a 3D representation of a previously unseen object given a single image of this object. Concretely, the network can predict an RGB image and a depth map of the object as seen from an arbitrary view. Several of these depth maps fused together give a full point cloud of the object. The point cloud can in turn be transformed into a surface mesh. The network is trained on renderings of synthetic 3D models of cars and chairs. 
It successfully deals with objects on cluttered background and generates reasonable predictions for real images of cars.", "We address the problem of learning accurate 3D shape and camera pose from a collection of unlabeled category-specific images. We train a convolutional network to predict both the shape and the pose from a single image by minimizing the reprojection error: given several views of an object, the projections of the predicted shapes to the predicted camera poses should match the provided views. To deal with pose ambiguity, we introduce an ensemble of pose predictors that we then distill it to a single student'' model. To allow for efficient learning of high-fidelity shape representation, we represent the shapes by point clouds and devise a formulation allowing for differentiable projection of these. Our experiments show that the distilled ensemble of pose predictors learns to estimate the pose accurately, while the point cloud representation allows to predict detailed shape models.", "What is a good vector representation of an object? We believe that it should be generative in 3D, in the sense that it can produce new 3D objects; as well as be predictable from 2D, in the sense that it can be perceived from 2D images. We propose a novel architecture, called the TL-embedding network, to learn an embedding space with these properties. The network consists of two components: (a) an autoencoder that ensures the representation is generative; and (b) a convolutional network that ensures the representation is predictable. This enables tackling a number of tasks including voxel prediction from 2D images and 3D model retrieval. Extensive experimental analysis demonstrates the usefulness and versatility of this embedding.", "Some existing CNN-based methods for single-view 3D object reconstruction represent a 3D object as either a 3D voxel occupancy grid or multiple depth-mask image pairs. However, these representations are inefficient since empty voxels or background pixels are wasteful. We propose a novel approach that addresses this limitation by replacing masks with “deformation-fields”. Given a single image at an arbitrary viewpoint, a CNN predicts multiple surfaces, each in a canonical location relative to the object. Each surface comprises a depth-map and corresponding deformation-field that ensures every pixel-depth pair in the depth-map lies on the object surface. These surfaces are then fused to form the full 3D shape. During training we use a combination of per-view loss and multi-view losses. The novel multi-view loss encourages the 3D points back-projected from a particular view to be consistent across views. Extensive experiments demonstrate the efficiency and efficacy of our method on single-view 3D object reconstruction." ] }
Another crucial aspect affecting the performance of deep learning approaches to shape reconstruction is the role of object pose. One stream of research @cite_0 @cite_26 incorporates pose implicitly into the shape reconstruction network, so that the predicted shape is aligned with the input image. Another stream @cite_6 @cite_47 shows that decoupling shape and pose when reasoning about object structure from a single image has performance benefits; for instance, a canonical shape embedding in a low-dimensional space is employed in @cite_34 @cite_12 . Besides the fully supervised training used in @cite_34 @cite_48 @cite_0 , researchers have also explored weakly supervised training without an explicit 3D loss. In Yan et al. @cite_45 and Tulsiani et al. @cite_18 , the 3D shape generated by the network is supervised through its projections at multiple viewpoints against the corresponding ground-truth silhouettes (sketched below). In particular, Zhu et al. @cite_6 mitigate the domain gap between the synthetic training images and real test images by fine-tuning their network on a small number of real images with ground-truth silhouette annotations. This weakly supervised training loss is similar to the silhouette-shape constraint we enforce during inference.
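The silhouette supervision can be illustrated with the simplified sketch below, which projects predicted 3D points through a given camera and penalizes points that land outside the ground-truth mask. The cited works use differentiable (soft) projections; the hard mask lookup here is only an illustrative stand-in and assumes points with positive depth.

import torch

def silhouette_loss(points, K, R, t, mask):
    # points: (N, 3) predicted shape in the object frame.
    # K: (3, 3) intrinsics; R: (3, 3), t: (3,) object-to-camera pose.
    # mask: (H, W) binary ground-truth silhouette (1 on the object).
    cam = points @ R.T + t            # transform into the camera frame
    uv = cam @ K.T
    uv = uv[:, :2] / uv[:, 2:3]       # pinhole projection
    H, W = mask.shape
    u = uv[:, 0].round().clamp(0, W - 1).long()
    v = uv[:, 1].round().clamp(0, H - 1).long()
    inside = mask[v, u].float()       # 1 where a point lands on the mask
    return (1.0 - inside).mean()      # fraction projecting off-silhouette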
{ "cite_N": [ "@cite_18", "@cite_26", "@cite_48", "@cite_6", "@cite_0", "@cite_45", "@cite_47", "@cite_34", "@cite_12" ], "mid": [ "2609026071", "", "2342277278", "2963641844", "2560722161", "2551540143", "2962988048", "2964137676", "2546066744" ], "abstract": [ "We study the notion of consistency between a 3D shape and a 2D observation and propose a differentiable formulation which allows computing gradients of the 3D shape given an observation from an arbitrary view. We do so by reformulating view consistency using a differentiable ray consistency (DRC) term. We show that this formulation can be incorporated in a learning framework to leverage different types of multi-view observations e.g. foreground masks, depth, color images, semantics etc. as supervision for learning single-view 3D prediction. We present empirical analysis of our technique in a controlled setting. We also show that this approach allows us to improve over existing techniques for single-view reconstruction of objects from the PASCAL VOC dataset.", "", "Inspired by the recent success of methods that employ shape priors to achieve robust 3D reconstructions, we propose a novel recurrent neural network architecture that we call the 3D Recurrent Reconstruction Neural Network (3D-R2N2). The network learns a mapping from images of objects to their underlying 3D shapes from a large collection of synthetic data [13]. Our network takes in one or more images of an object instance from arbitrary viewpoints and outputs a reconstruction of the object in the form of a 3D occupancy grid. Unlike most of the previous works, our network does not require any image annotations or object class labels for training or testing. Our extensive experimental analysis shows that our reconstruction framework (i) outperforms the state-of-the-art methods for single view reconstruction, and (ii) enables the 3D reconstruction of objects in situations when traditional SFM SLAM methods fail (because of lack of texture and or wide baseline).", "An emerging problem in computer vision is the reconstruction of 3D shape and pose of an object from a single image. Hitherto, the problem has been addressed through the application of canonical deep learning methods to regress from the image directly to the 3D shape and pose labels. These approaches, however, are problematic from two perspectives. First, they are minimizing the error between 3D shapes and pose labels - with little thought about the nature of this “label error” when reprojecting the shape back onto the image. Second, they rely on the onerous and ill-posed task of hand labeling natural images with respect to 3D shape and pose. In this paper we define the new task of pose-aware shape reconstruction from a single image, and we advocate that cheaper 2D annotations of objects silhouettes in natural images can be utilized. We design architectures of pose-aware shape reconstruction which reproject the predicted shape back on to the image using the predicted pose. Our evaluation on several object categories demonstrates the superiority of our method for predicting pose-aware 3D shapes from natural images.", "Generation of 3D data by deep neural network has been attracting increasing attention in the research community. The majority of extant works resort to regular representations such as volumetric grids or collection of images, however, these representations obscure the natural invariance of 3D shapes under geometric transformations, and also suffer from a number of other issues. 
In this paper we address the problem of 3D reconstruction from a single image, generating a straight-forward form of output – point cloud coordinates. Along with this problem arises a unique and interesting issue, that the groundtruth shape for an input image may be ambiguous. Driven by this unorthordox output form and the inherent ambiguity in groundtruth, we design architecture, loss function and learning paradigm that are novel and effective. Our final solution is a conditional shape sampler, capable of predicting multiple plausible 3D point clouds from an input image. In experiments not only can our system outperform state-of-the-art methods on single image based 3D reconstruction benchmarks, but it also shows strong performance for 3D shape completion and promising ability in making multiple plausible predictions.", "Understanding the 3D world is a fundamental problem in computer vision. However, learning a good representation of 3D objects is still an open problem due to the high dimensionality of the data and many factors of variation involved. In this work, we investigate the task of single-view 3D object reconstruction from a learning agent's perspective. We formulate the learning process as an interaction between 3D and 2D representations and propose an encoder-decoder network with a novel projection loss defined by the projective transformation. More importantly, the projection loss enables the unsupervised learning using 2D observation without explicit 3D supervision. We demonstrate the ability of the model in generating 3D volume from a single 2D image with three sets of experiments: (1) learning from single-class objects; (2) learning from multi-class objects and (3) testing on novel object classes. Results show superior performance and better generalization ability for 3D object reconstruction when the projection loss is involved.", "We study 3D shape modeling from a single image and make contributions to it in three aspects. First, we present Pix3D, a large-scale benchmark of diverse image-shape pairs with pixel-level 2D-3D alignment. Pix3D has wide applications in shape-related tasks including reconstruction, retrieval, viewpoint estimation, etc. Building such a large-scale dataset, however, is highly challenging; existing datasets either contain only synthetic data, or lack precise alignment between 2D images and 3D shapes, or only have a small number of images. Second, we calibrate the evaluation criteria for 3D shape reconstruction through behavioral studies, and use them to objectively and systematically benchmark cutting-edge reconstruction algorithms on Pix3D. Third, we design a novel model that simultaneously performs 3D reconstruction and pose estimation; our multi-task learning approach achieves state-of-the-art performance on both tasks.", "What is a good vector representation of an object? We believe that it should be generative in 3D, in the sense that it can produce new 3D objects; as well as be predictable from 2D, in the sense that it can be perceived from 2D images. We propose a novel architecture, called the TL-embedding network, to learn an embedding space with these properties. The network consists of two components: (a) an autoencoder that ensures the representation is generative; and (b) a convolutional network that ensures the representation is predictable. This enables tackling a number of tasks including voxel prediction from 2D images and 3D model retrieval. 
Extensive experimental analysis demonstrates the usefulness and versatility of this embedding.", "We study the problem of 3D object generation. We propose a novel framework, namely 3D Generative Adversarial Network (3D-GAN), which generates 3D objects from a probabilistic space by leveraging recent advances in volumetric convolutional networks and generative adversarial nets. The benefits of our model are three-fold: first, the use of an adversarial criterion, instead of traditional heuristic criteria, enables the generator to capture object structure implicitly and to synthesize high-quality 3D objects; second, the generator establishes a mapping from a low-dimensional probabilistic space to the space of 3D objects, so that we can sample objects without a reference image or CAD models, and explore the 3D object manifold; third, the adversarial discriminator provides a powerful 3D shape descriptor which, learned without supervision, has wide applications in 3D object recognition. Experiments demonstrate that our method generates high-quality 3D objects, and our unsupervisedly learned features achieve impressive performance on 3D object recognition, comparable with those of supervised learning methods." ] }
Following best practices in the deep learning literature, we choose a point cloud as the shape representation, decouple shape and pose by reconstructing 3D shapes in a canonical pose and estimating object pose separately, and adopt the two-step training schedule of the TL-embedding network @cite_34 for our shape reconstruction network: a latent space of 3D shapes is first learned explicitly via an auto-encoder, and an image regressor that maps an RGB image into the learned latent space is then trained (a sketch of this schedule follows below). On top of these deep learning components, we use the predicted silhouette and the learned shape prior to turn the one-shot prediction of the deep networks into an optimization at inference time.
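A minimal sketch of this two-step schedule follows, with flattened point-cloud shapes and random stand-in data; the layer sizes, image resolution, and iteration counts are hypothetical placeholders rather than the configuration used in the paper.

import torch
import torch.nn as nn

code_dim, d = 128, 3 * 1024          # latent size, flattened shape size
encoder = nn.Sequential(nn.Linear(d, 512), nn.ReLU(), nn.Linear(512, code_dim))
decoder = nn.Sequential(nn.Linear(code_dim, 512), nn.ReLU(), nn.Linear(512, d))

shapes = torch.randn(64, d)          # stand-in canonical-pose shapes
images = torch.randn(64, 3, 64, 64)  # stand-in paired RGB crops

# Step 1: learn the latent shape space with the auto-encoder alone.
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()))
for _ in range(10):
    loss = nn.functional.mse_loss(decoder(encoder(shapes)), shapes)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Step 2: train an image regressor into the already-learned latent
# space; the encoded shape codes serve as fixed regression targets.
regressor = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(16 * 32 * 32, code_dim))
opt2 = torch.optim.Adam(regressor.parameters())
with torch.no_grad():
    targets = encoder(shapes)
for _ in range(10):
    loss = nn.functional.mse_loss(regressor(images), targets)
    opt2.zero_grad()
    loss.backward()
    opt2.step()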
{ "cite_N": [ "@cite_34" ], "mid": [ "2964137676" ], "abstract": [ "What is a good vector representation of an object? We believe that it should be generative in 3D, in the sense that it can produce new 3D objects; as well as be predictable from 2D, in the sense that it can be perceived from 2D images. We propose a novel architecture, called the TL-embedding network, to learn an embedding space with these properties. The network consists of two components: (a) an autoencoder that ensures the representation is generative; and (b) a convolutional network that ensures the representation is predictable. This enables tackling a number of tasks including voxel prediction from 2D images and 3D model retrieval. Extensive experimental analysis demonstrates the usefulness and versatility of this embedding." ] }
Similar to our framework, CodeSLAM @cite_50 , MarrNet @cite_26 , and Zhu et al. @cite_39 also use a low-dimensional latent space to represent geometry. For scene-level geometry reconstruction, CodeSLAM forms a photometric loss over nearby views and minimizes it by searching in a learned latent space of depth maps. MarrNet @cite_31 and Zhu et al. @cite_39 enforce a 2.5D (silhouette, depth, or normal) to 3D shape constraint and a photometric loss, respectively, to search in a latent space of object shapes. A common shortcoming of these works, which we address here, is that they do not explicitly impose a probabilistic prior on the latent space during optimization; our ablation study shows that such a prior is essential for optimization at inference time (see the sketch below).
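To illustrate what an explicit latent prior adds at inference time, the sketch below optimizes a latent code against a stand-in data term plus a GMM negative log-likelihood; the mixture parameters, the prior weight, and the data term are hypothetical placeholders for the fitted prior and the silhouette/photometric losses discussed above.

import math
import torch

code_dim, n_comp = 128, 10
# Pretend these mixture parameters were fitted offline on training codes.
weights = torch.softmax(torch.randn(n_comp), dim=0)
means = torch.randn(n_comp, code_dim)
log_sigma = torch.zeros(n_comp, code_dim)  # diagonal std-devs (log scale)

def gmm_log_prob(z):
    # log p(z) under a diagonal-covariance mixture, written in torch so
    # that gradients flow back into the latent code z.
    var = torch.exp(2 * log_sigma)
    diff = z.unsqueeze(0) - means                      # (n_comp, D)
    comp = -0.5 * ((diff ** 2 / var).sum(-1)
                   + (2 * log_sigma).sum(-1)
                   + code_dim * math.log(2 * math.pi))
    return torch.logsumexp(torch.log(weights) + comp, dim=0)

z = torch.randn(code_dim, requires_grad=True)
opt = torch.optim.Adam([z], lr=1e-2)
for _ in range(100):
    data_term = (z ** 2).mean()        # stand-in for the image-based loss
    loss = data_term - 0.01 * gmm_log_prob(z)  # prior keeps z plausible
    opt.zero_grad()
    loss.backward()
    opt.step()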
{ "cite_N": [ "@cite_31", "@cite_26", "@cite_50", "@cite_39" ], "mid": [ "2791804052", "", "2963706662", "2767959606" ], "abstract": [ "Understanding 3D object structure from a single image is an important but challenging task in computer vision, mostly due to the lack of 3D object annotations to real images. Previous research tackled this problem by either searching for a 3D shape that best explains 2D annotations, or training purely on synthetic data with ground truth 3D information. In this work, we propose 3D INterpreter Networks (3D-INN), an end-to-end trainable framework that sequentially estimates 2D keypoint heatmaps and 3D object skeletons and poses. Our system learns from both 2D-annotated real images and synthetic 3D data. This is made possible mainly by two technical innovations. First, heatmaps of 2D keypoints serve as an intermediate representation to connect real and synthetic data. 3D-INN is trained on real images to estimate 2D keypoint heatmaps from an input image; it then predicts 3D object structure from heatmaps using knowledge learned from synthetic 3D shapes. By doing so, 3D-INN benefits from the variation and abundance of synthetic 3D objects, without suffering from the domain difference between real and synthesized images, often due to imperfect rendering. Second, we propose a Projection Layer, mapping estimated 3D structure back to 2D. During training, it ensures 3D-INN to predict 3D structure whose projection is consistent with the 2D annotations to real images. Experiments show that the proposed system performs well on both 2D keypoint estimation and 3D structure recovery. We also demonstrate that the recovered 3D information has wide vision applications, such as image retrieval.", "", "The representation of geometry in real-time 3D perception systems continues to be a critical research issue. Dense maps capture complete surface shape and can be augmented with semantic labels, but their high dimensionality makes them computationally costly to store and process, and unsuitable for rigorous probabilistic inference. Sparse feature-based representations avoid these problems, but capture only partial scene information and are mainly useful for localisation only. We present a new compact but dense representation of scene geometry which is conditioned on the intensity data from a single image and generated from a code consisting of a small number of parameters. We are inspired by work both on learned depth from images, and auto-encoders. Our approach is suitable for use in a keyframe-based monocular dense SLAM system: While each keyframe with a code can produce a depth map, the code can be optimised efficiently jointly with pose variables and together with the codes of overlapping keyframes to attain global consistency. Conditioning the depth map on the image allows the code to only represent aspects of the local geometry which cannot directly be predicted from the image. We explain how to learn our code representation, and demonstrate its advantageous properties in monocular SLAM.", "Reconstructing 3D shapes from a sequence of images has long been a problem of interest in computer vision. Classical Structure from Motion (SfM) methods have attempted to solve this problem through projected point displacement & bundle adjustment. More recently, deep methods have attempted to solve this problem by directly learning a relationship between geometry and appearance. There is, however, a significant gap between these two strategies. 
SfM tackles the problem from purely a geometric perspective, taking no account of the object shape prior. Modern deep methods more often throw away geometric constraints altogether, rendering the results unreliable. In this paper we make an effort to bring these two seemingly disparate strategies together. We introduce learned shape prior in the form of deep shape generators into Photometric Bundle Adjustment (PBA) and propose to accommodate full 3D shape generated by the shape prior within the optimization-based inference framework, demonstrating impressive results." ] }
1811.12043
2903251150
Attention mechanisms are a design trend of deep neural networks that stands out in various computer vision tasks. Recently, some works have attempted to apply attention mechanisms to single image super-resolution (SR) tasks. However, they apply the mechanisms to SR in the same or similar ways used for high-level computer vision problems without much consideration of the different nature between SR and other problems. In this paper, we propose a new attention method, which is composed of new channel-wise and spatial attention mechanisms optimized for SR and a new fused attention to combine them. Based on this, we propose a new residual attention module (RAM) and a SR network using RAM (SRRAM). We provide in-depth experimental analysis of different attention mechanisms in SR. It is shown that the proposed method can construct both deep and lightweight SR networks showing improved performance in comparison to existing state-of-the-art methods.
Similarly to CBAM, the CSAR block @cite_25 includes both CA and SA. The former is equal to that of RCAB. For SA, in contrast to CBAM, the input feature map is processed directly, without first going through a pooling step. The SA branch employs two @math convolutions, where the first one has @math filters and the second one has a single filter. Here, @math is the increase ratio. While CBAM combines the two attention mechanisms sequentially, the CSAR block combines them in a parallel manner using concatenation and a @math convolution.
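The parallel combination described above can be sketched as follows, with SE-style channel attention and a two-convolution spatial branch; since the kernel sizes and the increase ratio are hidden behind @math placeholders here, the 1x1 kernels and increase=2 below are assumptions, not the authors' exact configuration.

import torch
import torch.nn as nn

class CSARBlock(nn.Module):
    """Parallel channel + spatial attention, fused by concat and 1x1 conv."""
    def __init__(self, channels, reduction=16, increase=2):
        super().__init__()
        # Channel attention (squeeze-and-excitation style).
        self.ca = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid())
        # Spatial attention: two convs, the first widening by the increase
        # ratio, the second reducing to a single attention map.
        self.sa = nn.Sequential(
            nn.Conv2d(channels, increase * channels, 1), nn.ReLU(),
            nn.Conv2d(increase * channels, 1, 1), nn.Sigmoid())
        # Fuse the two attended maps back to C channels.
        self.fuse = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, x):
        out = torch.cat([x * self.ca(x), x * self.sa(x)], dim=1)
        return self.fuse(out)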
{ "cite_N": [ "@cite_25" ], "mid": [ "2892998444" ], "abstract": [ "The performance of single image super-resolution has achieved significant improvement by utilizing deep convolutional neural networks (CNNs). The features in deep CNN contain different types of information which make different contributions to image reconstruction. However, most CNN-based models lack discriminative ability for different types of information and deal with them equally, which results in the representational capacity of the models being limited. On the other hand, as the depth of neural networks grows, the long-term information coming from preceding layers is easy to be weaken or lost in late layers, which is adverse to super-resolving image. To capture more informative features and maintain long-term information for image super-resolution, we propose a channel-wise and spatial feature modulation (CSFM) network in which a sequence of feature-modulation memory (FMM) modules is cascaded with a densely connected structure to transform low-resolution features to high informative features. In each FMM module, we construct a set of channel-wise and spatial attention residual (CSAR) blocks and stack them in a chain structure to dynamically modulate multi-level features in a global-and-local manner. This feature modulation strategy enables the high contribution information to be enhanced and the redundant information to be suppressed. Meanwhile, for long-term information persistence, a gated fusion (GF) node is attached at the end of the FMM module to adaptively fuse hierarchical features and distill more effective information via the dense skip connections and the gating mechanism. Extensive quantitative and qualitative evaluations on benchmark datasets illustrate the superiority of our proposed method over the state-of-the-art methods." ] }
1811.11874
2963959798
Retinal template matching and registration is an important challenge in teleophthalmology with low-cost imaging devices. However, the images from such devices generally have a small field of view (FOV) and image quality degradations, making matching difficult. In this paper, we develop an efficient and accurate retinal matching technique that combines dimension reduction and mutual information (MI), called RetinaMatch. The dimension reduction initializes the MI optimization as a coarse localization process, which narrows the optimization domain and avoids local optima. The effectiveness of RetinaMatch is demonstrated on the open fundus image database STARE with simulated reduced FOV and anticipated degradations, and on retinal images acquired by adapter-based optics attached to a smartphone. RetinaMatch achieves a success rate over 94% on human retinal images with the matched target registration errors below 2 pixels on average, excluding the observer variability, outperforming standard template matching solutions. In the application of measuring vessel diameter repeatedly, single pixel errors are expected. In addition, our method can be used in the process of image mosaicking with area-based registration, providing a robust approach when feature-based methods fail. To the best of our knowledge, this is the first template matching algorithm for retina images with small template images from unconstrained retinal areas. In the context of the emerging mixed reality market, we envision automated retinal image matching and registration methods as transformative for advanced teleophthalmology and long-term retinal monitoring.
Much of the foundational work on template matching of retinal images is based on more general image registration methods, which have been studied comprehensively in recent years. However, general retinal registration methods focus on matching image pairs that both have a large FOV, with local deformations or different image modalities. Existing retinal template matching algorithms are limited to detecting specific objects in the image, where the template always contains a certain feature, such as the optic disc, exudates, or artifacts @cite_35 @cite_1 @cite_21 .
{ "cite_N": [ "@cite_35", "@cite_21", "@cite_1" ], "mid": [ "2105172918", "2040003485", "2010099973" ], "abstract": [ "The optic disk (OD) center and margin are typically requisite landmarks in establishing a frame of reference for classifying retinal and optic nerve pathology. Reliable and efficient OD localization and segmentation are important tasks in automatic eye disease screening. This paper presents a new, fast, and fully automatic OD localization and segmentation algorithm developed for retinal disease screening. First, OD location candidates are identified using template matching. The template is designed to adapt to different image resolutions. Then, vessel characteristics (patterns) on the OD are used to determine OD location. Initialized by the detected OD center and estimated OD radius, a fast, hybrid level-set model, which combines region and local gradient information, is applied to the segmentation of the disk boundary. Morphological filtering is used to remove blood vessels and bright regions other than the OD that affect segmentation in the peripapillary region. Optimization of the model parameters and their effect on the model performance are considered. Evaluation was based on 1200 images from the publicly available MESSIDOR database. The OD location methodology succeeded in 1189 out of 1200 images (99 success). The average mean absolute distance between the segmented boundary and the reference standard is 10 of the estimated OD radius for all image sizes. Its efficiency, robustness, and accuracy make the OD localization and segmentation scheme described herein suitable for automatic retinal disease screening in a variety of clinical settings.", "The continuous development of automatic retinal diseases diagnosis systems based on image processing has shown their potential for clinical practice. However, the accuracy of these systems is often compromised, mainly due to the intrinsic difficulty in detecting the abnormal structures and also due to deficiencies in the image acquisition which affects image quality. Light flares are one of such deficiencies that usually don't compromise the overall image quality, but can be misclassified by an automatic diagnosis system. In this article a method is proposed for detecting light artifacts (flares) on retinal images. The output from the light artifact detection is a binary image mask that is useful to reject those pixels from being further processed. The proposed method uses a template matching algorithm to detect artifacts similar to the predefined template artifact images. Two main types of light artifacts were identified: light flares and the central artifact. To reduce over-segmentation the light artifact candidates are characterized by their shape and color and are classified by a decision tree. The method was developed using a dataset of 61 images from which 20 were used for the classifier training and the remaining 41 for independent testing. With the test dataset the method obtained an average sensitivity false detection per image pairs of 0.97 0.12 for the central artifact and 0.73 0.36 for the light flares, what were considered good results regarding the heterogeneity of the dataset which contain low and high quality images.", "The automatic detection of exudates in colour eye fundus images is an important task in applications such as diabetic retinopathy screening. 
The presented work has been undertaken in the framework of the TeleOphta project, whose main objective is to auto-matically detect normal exams in a tele-ophthalmology network, thus reducing the burden on the readers. A new clinical database, e-ophtha EX, containing precisely manually contoured exudates, is introduced. As opposed to previously available databases, e-ophtha EX is very heterogeneous. It contains images gathered within the OPHDIAT telemedicine network for diabetic retinopathy screening. Image definition, quality, as well as patients condition or the retinograph used for the acquisition, for example, are subject to important changes between different examinations. The proposed exudate detection method has been designed for this complex situation. We propose new preprocessing methods, which perform not only normalization and denoising tasks, but also de-tect reflections and artifacts in the image. A new candidates segmentation method, based on mathematical morphology, is proposed. These candidates are characterized using classical features, but also novel contextual features. Finally, a random forest algorithm is used to detect the exudates among the candidates. The method has been validated on the e-ophtha EX database, obtaining an AUC of 0.95. It has been also validated on other databases, obtaining an AUC between 0.93 and 0.95, outperforming state-of-the-art methods." ] }
1811.11874
2963959798
Retinal template matching and registration is an important challenge in teleophthalmology with low-cost imaging devices. However, the images from such devices generally have a small field of view (FOV) and image quality degradations, making matching difficult. In this paper, we develop an efficient and accurate retinal matching technique that combines dimension reduction and mutual information (MI), called RetinaMatch. The dimension reduction initializes the MI optimization as a coarse localization process, which narrows the optimization domain and avoids local optima. The effectiveness of RetinaMatch is demonstrated on the open fundus image database STARE with simulated reduced FOV and anticipated degradations, and on retinal images acquired by adapter-based optics attached to a smartphone. RetinaMatch achieves a success rate over 94% on human retinal images with the matched target registration errors below 2 pixels on average, excluding the observer variability, outperforming standard template matching solutions. In the application of measuring vessel diameter repeatedly, single pixel errors are expected. In addition, our method can be used in the process of image mosaicking with area-based registration, providing a robust approach when feature-based methods fail. To the best of our knowledge, this is the first template matching algorithm for retina images with small template images from unconstrained retinal areas. In the context of the emerging mixed reality market, we envision automated retinal image matching and registration methods as transformative for advanced teleophthalmology and long-term retinal monitoring.
Area-based approaches match the intensity differences of an image pair under a similarity measure, such as SSD (sum of squared differences) @cite_3 , CC (cross-correlation) @cite_37 , and MI (mutual information) @cite_30 , and then optimize the similarity measure by searching the transformation space. By avoiding pixel-level feature detection, such approaches are more robust to poor-quality images than feature-based approaches. However, retinal images with sparse features and similar backgrounds are likely to lead the optimization into local extrema. Fig. shows an example of the area-based method with three similarity measures, where a small template image captured by the adapter-based optics is registered onto a full fundus image; both images are acquired with the same modality. SSD and normalized CC (NCC) do not exhibit an obvious peak at the alignment position (0,0), giving no clear information on the alignment quality. Normalized MI (NMI) shows a maximum at the alignment position, but it still has local extrema which can interfere with the global optimization. Besides, when the size difference between the template and the full image is too large, registration with MI can be computationally very expensive.
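For reference, the three similarity measures can be computed on a template and an equally sized image patch as in the NumPy sketch below; the histogram-based NMI estimate (with an assumed 32-bin joint histogram) is one common formulation, not necessarily the exact one used here.

import numpy as np

def ssd(a, b):
    """Sum of squared differences (lower is better)."""
    d = a.astype(float) - b.astype(float)
    return np.sum(d * d)

def ncc(a, b):
    """Normalized cross-correlation (higher is better)."""
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    return np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def nmi(a, b, bins=32):
    """Normalized mutual information, (H(A) + H(B)) / H(A, B)."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    hx = -np.sum(px[px > 0] * np.log(px[px > 0]))
    hy = -np.sum(py[py > 0] * np.log(py[py > 0]))
    hxy = -np.sum(pxy[pxy > 0] * np.log(pxy[pxy > 0]))
    return (hx + hy) / (hxy + 1e-12)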
{ "cite_N": [ "@cite_30", "@cite_37", "@cite_3" ], "mid": [ "2000195817", "2004776172", "2063237661" ], "abstract": [ "In the registration of temporal and stereo retinal images, the rotation angle is normally less than 5^o and the scaling factor is between 0.95 and 1.05. Due to sitting constraints in the imaging process, the x translation can be more than 100pixels, but the y translation is usually small. This paper successfully incorporates these constraints in the mutual information-based registration and exploits a constrained optimization to seek an optimal registration. The proposed approach increases the success rate of the registration algorithm significantly. The impacts of the dynamic ranges of registration parameters on the registration outcome are studied and the effects of the order of rotation, scaling, and translation are also investigated.", "The author details a digital image registration algorithm based on the cross correlation of triple invariant image descriptors. This algorithm is applied to ocular fundus images, and its accuracy and reliability are quantified using simulated transformations, simulated noise, and a series of actual fundus images. >", "This paper concerns the spatial and intensity transformations that map one image onto another. We present a general technique that facilitates nonlinear spatial (stereotactic) normalization and image realignment. This technique minimizes the sum of squares between two images following nonlinear spatial deformations and transformations of the voxel (intensity) values. The spatial and intensity transformations are obtained simultaneously, and explicitly, using a least squares solution and a series of linearising devices. The approach is completely noninteractive (automatic), nonlinear, and noniterative. It can be applied in any number of dimensions. Various applications are considered, including the realignment of functional magnetic resonance imaging (MRI) time-series, the linear (affine) and nonlinear spatial normalization of positron emission tomography (PET) and structural MRI images, the coregistration of PET to structural MRI, and, implicitly, the conjoining of PET and MRI to obtain high resolution functional images. © 1995 Wiley-Liss, Inc." ] }
1811.12004
2902911072
In this work we adapt multi-person pose estimation architecture to use it on edge devices. We follow the bottom-up approach from OpenPose, the winner of COCO 2016 Keypoints Challenge, because of its decent quality and robustness to number of people inside the frame. With proposed network design and optimized post-processing code the full solution runs at 28 frames per second (fps) on Intel @math NUC 6i7KYB mini PC and 26 fps on Core @math i7-6850K CPU. The network model has 4.1M parameters and 9 billions floating-point operations (GFLOPs) complexity, which is just 15 of the baseline 2-stage OpenPose with almost the same quality. The code and model are available as a part of Intel @math OpenVINO @math Toolkit.
In @cite_6 the authors proposed the fastest method to date with state-of-the-art quality among bottom-up methods, running at 23 fps on a single GTX 1080 Ti graphics card for an image with 3 persons. They note that performance degrades to 15 fps for an image with 20 persons. We base our work on the popular bottom-up method OpenPose, whose inference time is almost invariant to the number of people in the image.
{ "cite_N": [ "@cite_6" ], "mid": [ "2819476901" ], "abstract": [ "In this paper, we present MultiPoseNet, a novel bottom-up multi-person pose estimation architecture that combines a multi-task model with a novel assignment method. MultiPoseNet can jointly handle person detection, person segmentation and pose estimation problems. The novel assignment method is implemented by the Pose Residual Network (PRN) which receives keypoint and person detections, and produces accurate poses by assigning keypoints to person instances. On the COCO keypoints dataset, our pose estimation method outperforms all previous bottom-up methods both in accuracy (+4-point mAP over previous best result) and speed; it also performs on par with the best top-down methods while being at least 4x faster. Our method is the fastest real time system with ( 23 ) frames sec." ] }
1811.11880
2951584298
Deep learning is rapidly becoming a go-to tool for many artificial intelligence problems due to its ability to outperform other approaches and even humans at many problems. Despite its popularity we are still unable to accurately predict the time it will take to train a deep learning network to solve a given problem. This training time can be seen as the product of the training time per epoch and the number of epochs which need to be performed to reach the desired level of accuracy. Some work has been carried out to predict the training time for an epoch -- most have been based around the assumption that the training time is linearly related to the number of floating point operations required. However, this relationship is not true and becomes exacerbated in cases where other activities start to dominate the execution time. Such as the time to load data from memory or loss of performance due to non-optimal parallel execution. In this work we propose an alternative approach in which we train a deep learning network to predict the execution time for parts of a deep learning network. Timings for these individual parts can then be combined to provide a prediction for the whole execution time. This has advantages over linear approaches as it can model more complex scenarios. But, also, it has the ability to predict execution times for scenarios unseen in the training data. Therefore, our approach can be used not only to infer the execution time for a batch, or entire epoch, but it can also support making a well-informed choice for the appropriate hardware and model.
A different approach is to generate a performance prediction from timings of the individual floating point operations executed during a training step @cite_2 . This is justified by the fact that most deep learning computation is based on linear algebra operations using floating point mathematics, for which the number of floating point operations performed can easily be computed. However, due to imperfect parallelism of computations on GPUs, the use of non-floating point operations, and the data transfer times between the GPU and main memory, the execution time scales only approximately linearly with the number of floating point operations performed. Qi et al. @cite_2 attempt to compensate for this through a scaling factor derived from observing real deep learning training runs; however, this still assumes an even distribution of floating point and non-floating point work across all deep learning networks.
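The FLOP-scaling assumption being criticized can be made explicit with a back-of-the-envelope estimator like the one below; the default throughput, bandwidth, and efficiency numbers are purely illustrative, not measured values.

def estimate_layer_time(flops, bytes_moved,
                        peak_flops=1.0e13, bandwidth=5.0e11, efficiency=0.5):
    """Naive per-layer time estimate of the kind criticized above.

    flops       -- floating point operations performed by the layer
    bytes_moved -- data transferred to/from device memory
    efficiency  -- empirical scaling factor compensating for imperfect
                   parallelism (as in Paleo-style models)
    """
    compute_time = flops / (peak_flops * efficiency)
    memory_time = bytes_moved / bandwidth
    # A common approximation: compute and transfers overlap, so the layer
    # is bound by whichever is slower.
    return max(compute_time, memory_time)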
{ "cite_N": [ "@cite_2" ], "mid": [ "2752512710" ], "abstract": [ "Although various scalable deep learning software packages have been proposed, it remains unclear how to best leverage parallel and distributed computing infrastructure to accelerate their training and deployment. Moreover, the effectiveness of existing parallel and distributed systems varies widely based on the neural network architecture and dataset under consideration. In order to efficiently explore the space of scalable deep learning systems and quickly diagnose their effectiveness for a given problem instance, we introduce an analytical performance model called Paleo. Our key observation is that a neural network architecture carries with it a declarative specification of the computational requirements associated with its training and evaluation. By extracting these requirements from a given architecture and mapping them to a specific point within the design space of software, hardware and communication strategies, Paleo can efficiently and accurately model the expected scalability and performance of a putative deep learning system. We show that Paleo is robust to the choice of network architecture, hardware, software, communication schemes, and parallelization strategies. We further demonstrate its ability to accurately model various recently published scalability results for CNNs such as NiN, Inception and AlexNet." ] }
1811.12150
2902975600
Global average pooling (GAP) allows to localize discriminative information for recognition [40]. While GAP helps the convolution neural network to attend to the most discriminative features of an object, it may suffer if that information is missing e.g. due to camera viewpoint changes. To circumvent this issue, we argue that it is advantageous to attend to the global configuration of the object by modeling spatial relations among high-level features. We propose a novel architecture for Person Re-Identification, based on a novel parameter-free spatial attention layer introducing spatial relations among the feature map activations back to the model. Our spatial attention layer consistently improves the performance over the model without it. Results on four benchmarks demonstrate a superiority of our model over the state-of-the-art achieving rank-1 accuracy of 94.7% on Market-1501, 89.0% on DukeMTMC-ReID, 74.9% on CUHK03-labeled and 69.7% on CUHK03-detected.
Zhou et al. @cite_3 first showed the ability of GAP to localize the most discriminative image region. Instead of GAP, WELDON @cite_38 provides a more robust, automatic activation selection strategy for pooling by selecting multiple high- and low-score regions from the last feature map, which is a generalization of the min + max prediction function in @cite_26 . However, it only selects the top positive and negative instances with the highest and lowest activations and then aggregates them. Different from these methods, we use all of the activations on the last feature map while enforcing a fixed budget for the total importance via the softmax function. Note that both are parameter-free methods.
{ "cite_N": [ "@cite_38", "@cite_26", "@cite_3" ], "mid": [ "2438305798", "824377753", "2950328304" ], "abstract": [ "In this paper, we introduce a novel framework for WEakly supervised Learning of Deep cOnvolutional neural Networks (WELDON). Our method is dedicated to automatically selecting relevant image regions from weak annotations, e.g. global image labels, and encompasses the following contributions. Firstly, WELDON leverages recent improvements on the Multiple Instance Learning paradigm, i.e. negative evidence scoring and top instance selection. Secondly, the deep CNN is trained to optimize Average Precision, and fine-tuned on the target dataset with efficient computations due to convolutional feature sharing. A thorough experimental validation shows that WELDON outperforms state-of-the-art results on six different datasets.", "Part-based representations have been shown to be very useful for image classification. Learning part-based models is often viewed as a two-stage problem. First, a collection of informative parts is discovered, using heuristics that promote part distinctiveness and diversity, and then classifiers are trained on the vector of part responses. In this paper we unify the two stages and learn the image classifiers and a set of shared parts jointly. We generate an initial pool of parts by randomly sampling part candidates and selecting a good subset using L1 L2 regularization. All steps are driven \"directly\" by the same objective namely the classification loss on a training set. This lets us do away with engineered heuristics. We also introduce the notion of \"negative parts\", intended as parts that are negatively correlated with one or more classes. Negative parts are complementary to the parts discovered by other methods, which look only for positive correlations.", "In this work, we revisit the global average pooling layer proposed in [13], and shed light on how it explicitly enables the convolutional neural network to have remarkable localization ability despite being trained on image-level labels. While this technique was previously proposed as a means for regularizing training, we find that it actually builds a generic localizable deep representation that can be applied to a variety of tasks. Despite the apparent simplicity of global average pooling, we are able to achieve 37.1 top-5 error for object localization on ILSVRC 2014, which is remarkably close to the 34.2 top-5 error achieved by a fully supervised CNN approach. We demonstrate that our network is able to localize the discriminative image regions on a variety of tasks despite not being trained for them" ] }
1811.12150
2902975600
Global average pooling (GAP) allows to localize discriminative information for recognition [40]. While GAP helps the convolution neural network to attend to the most discriminative features of an object, it may suffer if that information is missing e.g. due to camera viewpoint changes. To circumvent this issue, we argue that it is advantageous to attend to the global configuration of the object by modeling spatial relations among high-level features. We propose a novel architecture for Person Re-Identification, based on a novel parameter-free spatial attention layer introducing spatial relations among the feature map activations back to the model. Our spatial attention layer consistently improves the performance over the model without it. Results on four benchmarks demonstrate a superiority of our model over the state-of-the-art achieving rank-1 accuracy of 94.7% on Market-1501, 89.0% on DukeMTMC-ReID, 74.9% on CUHK03-labeled and 69.7% on CUHK03-detected.
@cite_16 also adopted a similar idea for spatial pooling, introducing an extra hyper-parameter to trade off the relative importance of positive and negative instances. Still, it does not take all activations into consideration. In our case, the importance of an activation is determined by its magnitude relative to the others.
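A minimal sketch of a parameter-free softmax weighting over all activations, as described above, follows; reducing a location's importance to the channel-sum of its activations is our reading of the text, not necessarily the authors' exact formulation.

import torch
import torch.nn.functional as F

def spatial_softmax_pool(feat):
    """Parameter-free spatial attention: every activation contributes,
    with a fixed total budget enforced by the softmax.

    feat: (B, C, H, W) feature map from the last convolutional layer.
    Returns a (B, C) descriptor.
    """
    b, c, h, w = feat.shape
    # Importance of a location from the magnitude of its activations.
    energy = feat.sum(dim=1).view(b, h * w)          # (B, H*W)
    weights = F.softmax(energy, dim=1).view(b, 1, h, w)
    return (feat * weights).sum(dim=(2, 3))          # weighted average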
{ "cite_N": [ "@cite_16" ], "mid": [ "2738853914" ], "abstract": [ "This paper introduces WILDCAT, a deep learning method which jointly aims at aligning image regions for gaining spatial invariance and learning strongly localized features. Our model is trained using only global image labels and is devoted to three main visual recognition tasks: image classification, weakly supervised object localization and semantic segmentation. WILDCAT extends state-of-the-art Convolutional Neural Networks at three main levels: the use of Fully Convolutional Networks for maintaining spatial resolution, the explicit design in the network of local features related to different class modalities, and a new way to pool these features to provide a global image prediction required for weakly supervised training. Extensive experiments show that our model significantly outperforms state-of-the-art methods." ] }
1811.12150
2902975600
Global average pooling (GAP) allows to localize discriminative information for recognition [40]. While GAP helps the convolution neural network to attend to the most discriminative features of an object, it may suffer if that information is missing e.g. due to camera viewpoint changes. To circumvent this issue, we argue that it is advantageous to attend to the global configuration of the object by modeling spatial relations among high-level features. We propose a novel architecture for Person Re-Identification, based on a novel parameter-free spatial attention layer introducing spatial relations among the feature map activations back to the model. Our spatial attention layer consistently improves the performance over the model without it. Results on four benchmarks demonstrate a superiority of our model over the state-of-the-art achieving rank-1 accuracy of 94.7% on Market-1501, 89.0% on DukeMTMC-ReID, 74.9% on CUHK03-labeled and 69.7% on CUHK03-detected.
Attention in Person Re-ID. The attention mechanism proposed in @cite_21 has shown great success across a broad range of computer vision tasks. There also exist many attention-based approaches for Person Re-ID.
{ "cite_N": [ "@cite_21" ], "mid": [ "2133564696" ], "abstract": [ "Neural machine translation is a recently proposed approach to machine translation. Unlike the traditional statistical machine translation, the neural machine translation aims at building a single neural network that can be jointly tuned to maximize the translation performance. The models proposed recently for neural machine translation often belong to a family of encoder-decoders and consists of an encoder that encodes a source sentence into a fixed-length vector from which a decoder generates a translation. In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition." ] }
1811.12194
2902519742
We present a model for predicting electrocardiogram (ECG) abnormalities in short-duration 12-lead ECG signals which outperformed medical doctors on the 4th year of their cardiology residency. Such exams can provide a full evaluation of heart activity and have not been studied in previous end-to-end machine learning papers. Using the database of a large telehealth network, we built a novel dataset with more than 2 million ECG tracings, orders of magnitude larger than those used in previous studies. Moreover, our dataset is more realistic, as it consists of 12-lead ECGs recorded during standard in-clinics exams. Using this data, we trained a residual neural network with 9 convolutional layers to map 7 to 10 second ECG signals to 6 classes of ECG abnormalities. Future work should extend these results to cover a large range of ECG abnormalities, which could improve the accessibility of this diagnostic tool and avoid wrong diagnosis from medical doctors.
Classical ECG software, such as the University of Glasgow's ECG analysis program @cite_2 , extracts the main features of the ECG signal using signal processing techniques and uses them as input to classifiers. A literature review of these methods is given by @cite_3 . In @cite_27 a different approach is taken, where the ECG features are learned with an unsupervised method and then used as input to a supervised learning method.
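In contrast to these feature-engineering pipelines, the end-to-end model above operates on raw signals; a minimal 1D residual block of the kind such a network stacks might look as follows (a sketch under assumed channel counts and kernel size, not the paper's exact architecture).

import torch
import torch.nn as nn

class ResBlock1d(nn.Module):
    """Minimal 1D residual block for raw ECG signals (sketch only)."""
    def __init__(self, channels, kernel=17):
        super().__init__()
        pad = kernel // 2                      # keep the sequence length
        self.body = nn.Sequential(
            nn.Conv1d(channels, channels, kernel, padding=pad),
            nn.BatchNorm1d(channels), nn.ReLU(),
            nn.Conv1d(channels, channels, kernel, padding=pad),
            nn.BatchNorm1d(channels))

    def forward(self, x):
        return torch.relu(self.body(x) + x)    # skip connection

# A stack of such blocks followed by pooling and a linear layer can map
# (batch, 12, samples) ECG tensors to 6 abnormality logits.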
{ "cite_N": [ "@cite_27", "@cite_3", "@cite_2" ], "mid": [ "2289846183", "1576089372", "2108974680" ], "abstract": [ "In this paper, we propose a novel approach based on deep learning for active classification of electrocardiogram (ECG) signals. To this end, we learn a suitable feature representation from the raw ECG data in an unsupervised way using stacked denoising autoencoders (SDAEs) with sparsity constraint. After this feature learning phase, we add a softmax regression layer on the top of the resulting hidden representation layer yielding the so-called deep neural network (DNN). During the interaction phase, we allow the expert at each iteration to label the most relevant and uncertain ECG beats in the test record, which are then used for updating the DNN weights. As ranking criteria, the method relies on the DNN posterior probabilities to associate confidence measures such as entropy and Breaking-Ties (BT) to each test beat in the ECG record under analysis. In the experiments, we validate the method on the well-known MIT-BIH arrhythmia database as well as two other databases called INCART, and SVDB, respectively. Furthermore, we follow the recommendations of the Association for the Advancement of Medical Instrumentation (AAMI) for class labeling and results presentation. The results obtained show that the newly proposed approach provides significant accuracy improvements with less expert interaction and faster online retraining compared to state-of-the-art methods.", "Classification of electrocardiogram (ECG) signals plays an important role in diagnoses of heart diseases. An accurate ECG classification is a challenging problem. This paper presents a survey of ECG classification into arrhythmia types. Early and accurate detection of arrhythmia types is important in detecting heart diseases and choosing appropriate treatment for a patient. Different classifiers are available for ECG classification. Amongst all classifiers, artificial neural networks (ANNs) have become very popular and most widely used for ECG classification. This paper discusses the issues involved in ECG classification and presents a detailed survey of preprocessing techniques, ECG databases, feature extraction techniques, ANN based classifiers, and performance measures to address the mentioned issues. Furthermore, for each surveyed paper, our paper also presents detailed analysis of input beat selection and output of the classifiers.", "The University of Glasgow 12 15 lead ECG analysis program has been in continuous development for over 20 years. It has been adapted to meet the needs of different users and keep abreast of changes in terminology as well as new morphological features described in the literature. It is applicable to neonates as well as adults and takes account of racial variation in wave amplitudes. It has a capability for comparing serially recorded ECGs using one of two different approaches. The many varying features of the software have led to the introduction of the descriptor Uni-G (unique) ECG analysis program" ] }
1811.12139
2963709343
Abstract Compared with facial emotion estimation on categorical model, dimensional emotion estimation can describe numerous emotions more accurately. Most prior works of dimensional emotion estimation only considered laboratory data and used video, speech or other multi-modal features. Compared with other modal data, static images has superiorities of accessibility, which is more conducive to the emotion estimation in real world. In this paper, a two-level attention with two-stage multi-task learning (2Att-2Mt) framework is proposed for facial emotion estimation on only static images. Firstly, the features of corresponding region (position level features) are extracted and enhanced automatically by first-level attention mechanism. Then, we utilize Bi-directional Recurrent Neural Network (Bi-RNN) with self-attention (second-level attention) to make full use of the relationship features of different layers (layer-level features) adaptively. And then, we propose a two-stage multi-task learning structure, which exploits categorical representations to ameliorate the dimensional representations and estimate valence and arousal simultaneously in view of the inherent complexity of dimensional representations and correlation of the two targets. The quantitative results conducted on AffectNet dataset show significant advancement on Concordance Correlation Coefficient(CCC) and Root Mean Square Error (RMSE), illustrating the superiority of the proposed framework. Besides, extensive comparative experiments have also fully demonstrated the effectiveness of different components (2Att and 2Mt) in our framework.
At present, most of the aforementioned approaches focus on the discrete model; facial emotion estimation on the continuous model remains a troublesome task. There have been several competitions based on continuous models, e.g., the Audio/Visual Emotion Challenge (AVEC), the One-Minute Gradual-Emotion Behavior Challenge (OMG-Emotion), and the Affect-in-the-Wild Challenge (Aff-Wild). In AVEC 2017, the winner @cite_26 combined text, acoustic, and video features and utilized an LSTM to extract temporal information for the final prediction. Peng et al. jointly trained an audio and video model incorporating Bi-LSTM and temporal pooling, and won first prize in OMG-Emotion. Kollias et al. @cite_31 fed the final convolution layer, pooling layer, and fully connected layer into a gated recurrent unit and fused the final results. Chang et al. @cite_10 proposed an integrated network that extracts face attribute and action unit (AU) information while estimating Valence-Arousal values simultaneously, and achieved first place in Aff-Wild.
{ "cite_N": [ "@cite_31", "@cite_26", "@cite_10" ], "mid": [ "", "2765291577", "2737986725" ], "abstract": [ "", "Automatic emotion recognition is a challenging task which can make great impact on improving natural human computer interactions. In this paper, we present our effort for the Affect Subtask in the Audio Visual Emotion Challenge (AVEC) 2017, which requires participants to perform continuous emotion prediction on three affective dimensions: Arousal, Valence and Likability based on the audiovisual signals. We highlight three aspects of our solutions: 1) we explore and fuse different hand-crafted and deep learned features from all available modalities including acoustic, visual, and textual modalities, and we further consider the interlocutor influence for the acoustic features; 2) we compare the effectiveness of non-temporal model SVR and temporal model LSTM-RNN and show that the LSTM-RNN can not only alleviate the feature engineering efforts such as construction of contextual features and feature delay, but also improve the recognition performance significantly; 3) we apply multi-task learning strategy for collaborative prediction of multiple emotion dimensions with shared representations according to the fact that different emotion dimensions are correlated with each other. Our solutions achieve the CCC of 0.675, 0.756 and 0.509 on arousal, valence, and likability respectively on the challenge testing set, which outperforms the baseline system with corresponding CCC of 0.375, 0.466, and 0.246 on arousal, valence, and likability.", "Facial expression recognition has been investigated for many years, and there are two popular models: Action Units (AUs) and the Valence-Arousal space (V-A space) that have been widely used. However, most of the databases for estimating V-A intensity are captured in laboratory settings, and the benchmarks \"in-the-wild\" do not exist. Thus, the First Affect-In-The-Wild Challenge released a database for V-A estimation while the videos were captured in wild condition. In this paper, we propose an integrated deep learning framework for facial attribute recognition, AU detection, and V-A estimation. The key idea is to apply AUs to estimate the V-A intensity since both AUs and V-A space could be utilized to recognize some emotion categories. Besides, the AU detector is trained based on the convolutional neural network (CNN) for facial attribute recognition. In experiments, we will show the results of the above three tasks to verify the performances of our proposed network framework." ] }
1811.12139
2963709343
Abstract Compared with facial emotion estimation on categorical model, dimensional emotion estimation can describe numerous emotions more accurately. Most prior works of dimensional emotion estimation only considered laboratory data and used video, speech or other multi-modal features. Compared with other modal data, static images has superiorities of accessibility, which is more conducive to the emotion estimation in real world. In this paper, a two-level attention with two-stage multi-task learning (2Att-2Mt) framework is proposed for facial emotion estimation on only static images. Firstly, the features of corresponding region (position level features) are extracted and enhanced automatically by first-level attention mechanism. Then, we utilize Bi-directional Recurrent Neural Network (Bi-RNN) with self-attention (second-level attention) to make full use of the relationship features of different layers (layer-level features) adaptively. And then, we propose a two-stage multi-task learning structure, which exploits categorical representations to ameliorate the dimensional representations and estimate valence and arousal simultaneously in view of the inherent complexity of dimensional representations and correlation of the two targets. The quantitative results conducted on AffectNet dataset show significant advancement on Concordance Correlation Coefficient(CCC) and Root Mean Square Error (RMSE), illustrating the superiority of the proposed framework. Besides, extensive comparative experiments have also fully demonstrated the effectiveness of different components (2Att and 2Mt) in our framework.
The process of human perception demonstrates the importance of the attention mechanism @cite_7 . Broadly, attention can be seen as a mechanism for allocating available processing resources to the most signal-fertile components @cite_20 . Attention mechanisms are now used extensively across fields such as machine translation, visual question answering, and image captioning. In previous research, most attention mechanisms were applied to sequence processing. Hu et al. @cite_20 designed the Squeeze-and-Excitation Network to learn feature-map weights in line with the loss, so that informative features are enhanced and futile features are suppressed; this can be seen as applying an attention mechanism to feature maps along the channel dimension. Qin et al. @cite_28 employed an attention mechanism to generate saliency maps, which have a strong positive correlation with facial emotion and can be seen as learned features. Wang et al. @cite_23 proposed an attention model for image classification that uses an hourglass model to construct trunk and mask branches, where the mask branch is a bottom-up/top-down structure able to generate soft attention weights corresponding to each pixel of the original input. In this paper, we exploit the residual attention block proposed by Wang et al. to extract features as the first-level attention.
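To make the trunk-and-mask idea concrete, below is a heavily simplified sketch of residual attention; the real mask branch is an hourglass (bottom-up/top-down) network, for which a single 1x1 convolution plus sigmoid stands in here purely for illustration.

import torch
import torch.nn as nn

class ResidualAttention(nn.Module):
    """Simplified trunk + mask attention: out = (1 + M(x)) * T(x)."""
    def __init__(self, channels):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1))
        # Placeholder for the bottom-up/top-down hourglass mask branch.
        self.mask = nn.Sequential(
            nn.Conv2d(channels, channels, 1), nn.Sigmoid())

    def forward(self, x):
        t = self.trunk(x)
        m = self.mask(x)           # soft weights in [0, 1] per position
        return (1 + m) * t         # residual attention learning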
{ "cite_N": [ "@cite_28", "@cite_20", "@cite_23", "@cite_7" ], "mid": [ "2793980118", "", "2609476118", "2951527505" ], "abstract": [ "Abstract In this paper, an eleven-layered Convolutional Neural Network with Visual Attention is proposed for facial expression recognition. The network is composed of three components. First, local convolutional features of faces are extracted by a stack of ten convolutional layers. Second, the regions of interest are automatically determined according to these local features by the embedded attention model. Third, the local features in these regions are aggregated and used to infer the emotional label. These three components are integrated into a single network which can be trained in an end-to-end scheme. Extensive experiments on four kinds of data (namely aligned frontal faces, faces in different poses, aligned unconstrained faces, and grouped unconstrained faces) prove that the proposed method can improve the accuracy and obtain good visualization. The visualization shows that the learned regions of interest are partly consistent with the locations of emotion specific Action Units. This founding confirms the interpretation of Facial Action Coding System and Emotional Facial Action Coding System from a machine learning perspective.", "", "In this work, we propose \"Residual Attention Network\", a convolutional neural network using attention mechanism which can incorporate with state-of-art feed forward network architecture in an end-to-end training fashion. Our Residual Attention Network is built by stacking Attention Modules which generate attention-aware features. The attention-aware features from different modules change adaptively as layers going deeper. Inside each Attention Module, bottom-up top-down feedforward structure is used to unfold the feedforward and feedback attention process into a single feedforward process. Importantly, we propose attention residual learning to train very deep Residual Attention Networks which can be easily scaled up to hundreds of layers. Extensive analyses are conducted on CIFAR-10 and CIFAR-100 datasets to verify the effectiveness of every module mentioned above. Our Residual Attention Network achieves state-of-the-art object recognition performance on three benchmark datasets including CIFAR-10 (3.90 error), CIFAR-100 (20.45 error) and ImageNet (4.8 single model and single crop, top-5 error). Note that, our method achieves 0.6 top-1 accuracy improvement with 46 trunk depth and 69 forward FLOPs comparing to ResNet-200. The experiment also demonstrates that our network is robust against noisy labels.", "Applying convolutional neural networks to large images is computationally expensive because the amount of computation scales linearly with the number of image pixels. We present a novel recurrent neural network model that is capable of extracting information from an image or video by adaptively selecting a sequence of regions or locations and only processing the selected regions at high resolution. Like convolutional neural networks, the proposed model has a degree of translation invariance built-in, but the amount of computation it performs can be controlled independently of the input image size. While the model is non-differentiable, it can be trained using reinforcement learning methods to learn task-specific policies. 
We evaluate our model on several image classification tasks, where it significantly outperforms a convolutional neural network baseline on cluttered images, and on a dynamic visual control problem, where it learns to track a simple object without an explicit training signal for doing so." ] }
1811.11814
2902214234
Integrating multi-phase information is an effective way of boosting visual recognition. In this paper, we investigate this problem from the perspective of medical imaging analysis, in which two phases in CT scans known as arterial and venous are combined towards higher segmentation accuracy. To this end, we propose Phase Collaborative Network (PCN), an end-to-end network which contains both generative and discriminative modules to formulate phase-to-phase relations and data-to-label relations, respectively. Experiments are performed on several CT image segmentation datasets. PCN achieves superior performance with either two phases or only one phase available. Moreover, we empirically verify that the accuracy gain comes from the collaboration between phases.
Image segmentation is a critical problem in computer vision. Conventional methods, such as graph-based methods @cite_25 and handcrafted local features @cite_29 , tend to be replaced by deep learning techniques, typically deep neural networks that produce higher segmentation accuracy @cite_35 @cite_6 . As various deep network architectures have been proposed @cite_7 @cite_0 @cite_14 @cite_8 , segmentation networks have become more robust and have been applied to more tasks, such as video-based segmentation and instance segmentation @cite_42 @cite_41 @cite_3 @cite_9 @cite_19 , and to more types of data, such as 3D data @cite_46 @cite_1 . As segmentation networks extend to more and more tasks, researchers have also attempted to apply them to medical imaging analysis, where medical images differ from natural images in that the data appear in a volumetric form.
{ "cite_N": [ "@cite_35", "@cite_14", "@cite_7", "@cite_8", "@cite_41", "@cite_29", "@cite_42", "@cite_9", "@cite_1", "@cite_6", "@cite_3", "@cite_0", "@cite_19", "@cite_46", "@cite_25" ], "mid": [ "2952632681", "1686810756", "", "2949650786", "2950612966", "91894041", "", "2963866581", "2560609797", "", "2951277909", "", "2963391479", "2609719703", "1584247442" ], "abstract": [ "Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build \"fully convolutional\" networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet, the VGG net, and GoogLeNet) into fully convolutional networks and transfer their learned representations by fine-tuning to the segmentation task. We then define a novel architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20 relative improvement to 62.2 mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes one third of a second for a typical image.", "In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.", "", "Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57 error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28 relative improvement on the COCO object detection dataset. 
Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.", "We aim to detect all instances of a category in an image and, for each instance, mark the pixels that belong to it. We call this task Simultaneous Detection and Segmentation (SDS). Unlike classical bounding box detection, SDS requires a segmentation and not just a box. Unlike classical semantic segmentation, we require individual object instances. We build on recent work that uses convolutional neural networks to classify category-independent region proposals (R-CNN [16]), introducing a novel architecture tailored for SDS. We then use category-specific, top- down figure-ground predictions to refine our bottom-up proposals. We show a 7 point boost (16 relative) over our baselines on SDS, a 5 point boost (10 relative) over state-of-the-art on semantic segmentation, and state-of-the-art performance in object detection. Finally, we provide diagnostic tools that unpack performance and provide directions for future work.", "Label propagation has been shown to be effective in many automatic segmentation applications. However, its reliance on accurate image alignment means that segmentation results can be affected by any registration errors which occur. Patch-based methods relax this dependence by avoiding explicit one-to-one correspondence assumptions between images but are still limited by the search window size. Too small, and it does not account for enough registration error; too big, and it becomes more likely to select incorrect patches of similar appearance for label fusion. This paper presents a novel patch-based label propagation approach which uses relative geodesic distances to define patient-specific coordinate systems as spatial context to overcome this problem. The approach is evaluated on multi-organ segmentation of 20 cardiac MR images and 100 abdominal CT images, demonstrating competitive results.", "", "Recent years have seen tremendous progress in still-image segmentation; however the naive application of these state-of-the-art algorithms to every video frame requires considerable computation and ignores the temporal continuity inherent in video. We propose a video recognition framework that relies on two key observations: (1) while pixels may change rapidly from frame to frame, the semantic content of a scene evolves more slowly, and (2) execution can be viewed as an aspect of architecture, yielding purpose-fit computation schedules for networks. We define a novel family of “clockwork” convnets driven by fixed or adaptive clock signals that schedule the processing of different layers at different update rates according to their semantic stability. We design a pipeline schedule to reduce latency for real-time recognition and a fixed-rate schedule to reduce overall computation. Finally, we extend clockwork scheduling to adaptive video processing by incorporating data-driven clocks that can be tuned on unlabeled video. The accuracy and efficiency of clockwork convnets are evaluated on the Youtube-Objects, NYUD, and Cityscapes video datasets.", "Point cloud is an important type of geometric data structure. Due to its irregular format, most researchers transform such data to regular 3D voxel grids or collections of images. This, however, renders data unnecessarily voluminous and causes issues. 
In this paper, we design a novel type of neural network that directly consumes point clouds, which well respects the permutation invariance of points in the input. Our network, named PointNet, provides a unified architecture for applications ranging from object classification, part segmentation, to scene semantic parsing. Though simple, PointNet is highly efficient and effective. Empirically, it shows strong performance on par or even better than state of the art. Theoretically, we provide analysis towards understanding of what the network has learnt and why the network is robust with respect to input perturbation and corruption.", "", "Scene parsing is a technique that consists of giving a label to all pixels in an image according to the class they belong to. To ensure a good visual coherence and a high class accuracy, it is essential for a scene parser to capture image long range dependencies. In a feed-forward architecture, this can be simply achieved by considering a sufficiently large input context patch, around each pixel to be labeled. We propose an approach consisting of a recurrent convolutional neural network which allows us to consider a large input context, while limiting the capacity of the model. Contrary to most standard approaches, our method does not rely on any segmentation methods, nor any task-specific features. The system is trained in an end-to-end manner over raw pixels, and models complex spatial dependencies with low inference cost. As the context size increases with the built-in recurrence, the system identifies and corrects its own errors. Our approach yields state-of-the-art performance on both the Stanford Background Dataset and the SIFT Flow Dataset, while remaining very fast at test time.", "", "Over the last few years deep learning methods have emerged as one of the most prominent approaches for video analysis. However, so far their most successful applications have been in the area of video classification and detection, i.e., problems involving the prediction of a single class label or a handful of output variables per video. Furthermore, while deep networks are commonly recognized as the best models to use in these domains, there is a widespread perception that in order to yield successful results they often require time-consuming architecture search, manual tweaking of parameters and computationally intensive preprocessing or post-processing methods. In this paper we challenge these views by presenting a deep 3D convolutional architecture trained end to end to perform voxel-level prediction, i.e., to output a variable at every voxel of the video. Most importantly, we show that the same exact architecture can be used to achieve competitive results on three widely different voxel-prediction tasks: video semantic segmentation, optical flow estimation, and video coloring. The three networks learned on these problems are trained from raw video without any form of preprocessing and their outputs do not require post-processing to achieve outstanding performance. Thus, they offer an efficient alternative to traditional and much more computationally expensive methods in these video domains.", "In this paper, we tackle the labeling problem for 3D point clouds. We introduce a 3D point cloud labeling scheme based on 3D Convolutional Neural Network. Our approach minimizes the prior knowledge of the labeling problem and does not require a segmentation step or hand-crafted features as most previous approaches did.
Particularly, we present solutions for large data handling during the training and testing process. Experiments performed on the urban point cloud dataset containing 7 categories of objects show the robustness of our approach.", "We propose a novel kidney segmentation approach based on the graph cuts technique. The proposed approach depends on both image appearance and shape information. Shape information is gathered from a set of training shapes. Then we estimate the shape variations using a new distance probabilistic model which approximates the marginal densities of the kidney and its background in the variability region using a Poisson distribution refined by positive and negative Gaussian components. To segment a kidney slice, we align it with the training slices so we can use the distance probabilistic model. Then its gray level is approximated with a LCG with sign-alternate components. The spatial interaction between the neighboring pixels is identified using a new analytical approach. Finally, we formulate a new energy function using both image appearance models and shape constraints. This function is globally minimized using s-t graph cuts to get the optimal segmentation. Experimental results show that the proposed technique gives promising results compared to others without shape constraints." ] }
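The ResNet abstract quoted in this record describes reformulating stacked layers to learn a residual function F(x) = H(x) - x with an identity skip connection. A minimal PyTorch sketch of such a residual block (assuming PyTorch is available; the channel count and layer sizes are illustrative, not taken from any cited model):

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Basic residual block: output = ReLU(F(x) + x), so the stacked
    layers only need to fit the residual F(x) = H(x) - x."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # identity skip connection

x = torch.randn(1, 64, 32, 32)     # one 64-channel feature map (illustrative size)
print(ResidualBlock(64)(x).shape)  # torch.Size([1, 64, 32, 32])
```

Because the skip path is the identity, gradients flow directly to earlier layers, which is what makes very deep stacks of such blocks trainable.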
1811.11615
2903261989
While autonomous navigation has recently gained great interest in the field of reinforcement learning, only a few works in this field have focused on the time optimal velocity control problem, i.e. controlling a vehicle such that it travels at the maximal speed without becoming dynamically unstable. Achieving maximal speed is important in many situations, such as emergency vehicles traveling at high speeds to their destinations, and regular vehicles executing emergency maneuvers to avoid imminent collisions. Time optimal velocity control can be solved numerically using existing methods that are based on optimal control and vehicle dynamics. In this paper, we use deep reinforcement learning to generate the time optimal velocity control. Furthermore, we use the numerical solution to further improve the performance of the reinforcement learner. It is shown that the reinforcement learner outperforms the numerically derived solution, and that the hybrid approach (combining learning with the numerical solution) speeds up the learning process.
There are several approaches to velocity control using reinforcement learning, each solving a differently defined problem. For example, @cite_8 developed a "self-supervised" method for velocity selection in off-road terrain using the input of an inertial measurement unit. The goal is to protect the vehicle from excessive vibrations while achieving a higher, though not optimal, velocity. Rosolia and Borrelli present a model predictive controller that improves its performance based on previous iterations and test it on a racing car. This method is effective for repetitive tasks, in which the initial state is the same at each iteration; hence it cannot control a vehicle on arbitrary paths. Other approaches use machine learning based methods as a preprocessing stage only, e.g., using a convolutional neural network to create the input for a predictive controller to achieve aggressive driving capabilities. While convolutional neural network training is used as a preprocessing stage to create a cost map, the vehicle itself is controlled by a model predictive controller.
{ "cite_N": [ "@cite_8" ], "mid": [ "1914316289" ], "abstract": [ "We present a machine learning approach for estimating the second derivative of a drivable surface, its roughness. Robot perception generally focuses on the first derivative, obstacle detection. However, the second derivative is also important due to its direct relation (with speed) to the shock the vehicle experiences. Knowing the second derivative allows a vehicle to slow down in advance of rough terrain. Estimating the second derivative is challenging due to uncertainty. For example, at range, laser readings may be so sparse that significant information about the surface is missing. Also, a high degree of precision is required in projecting laser readings. This precision may be unavailable due to latency or error in the pose estimation. We model these sources of error as a multivariate polynomial. Its coefficients are learned using the shock data as ground truth -- the accelerometers are used to train the lasers. The resulting classifier operates on individual laser readings from a road surface described by a 3D point cloud. The classifier identifies sections of road where the second derivative is likely to be large. Thus, the vehicle can slow down in advance, reducing the shock it experiences. The algorithm is an evolution of one we used in the 2005 DARPA Grand Challenge. We analyze it using data from that route." ] }
1811.11615
2903261989
While autonomous navigation has recently gained great interest in the field of reinforcement learning, only a few works in this field have focused on the time optimal velocity control problem, i.e. controlling a vehicle such that it travels at the maximal speed without becoming dynamically unstable. Achieving maximal speed is important in many situations, such as emergency vehicles traveling at high speeds to their destinations, and regular vehicles executing emergency maneuvers to avoid imminent collisions. Time optimal velocity control can be solved numerically using existing methods that are based on optimal control and vehicle dynamics. In this paper, we use deep reinforcement learning to generate the time optimal velocity control. Furthermore, we use the numerical solution to further improve the performance of the reinforcement learner. It is shown that the reinforcement learner outperforms the numerically derived solution, and that the hybrid approach (combining learning with the numerical solution) speeds up the learning process.
There exist many methods for exploiting additional information about the problem being solved. Imitation learning uses external demonstrations to teach an agent to perform a given task. For example, in @cite_12 an agent is trained by human demonstrations to control the steering of a vehicle. Others use imitation learning to control a vehicle using low-cost sensors, and Lefèvre et al. learn driving style from driver demonstrations. Imitation learning by itself cannot exceed the demonstrator's performance, but it is possible to start the learning process by initializing the policy with imitation learning and then further improving it with reinforcement learning @cite_0 @cite_14 . These methods in fact subdivide the learning process into two distinct phases and do not actually use the demonstrations to improve the reinforcement learning process itself. The demonstrator for imitation learning is usually not available during the entire training process, but when another controller exists (and its computational cost is low), it can be integrated inseparably into training, which enables approaches such as the methods proposed in this paper.
{ "cite_N": [ "@cite_0", "@cite_14", "@cite_12" ], "mid": [ "2607198029", "2257979135", "2342840547" ], "abstract": [ "", "The game of Go has long been viewed as the most challenging of classic games for artificial intelligence owing to its enormous search space and the difficulty of evaluating board positions and moves. Here we introduce a new approach to computer Go that uses ‘value networks’ to evaluate board positions and ‘policy networks’ to select moves. These deep neural networks are trained by a novel combination of supervised learning from human expert games, and reinforcement learning from games of self-play. Without any lookahead search, the neural networks play Go at the level of stateof-the-art Monte Carlo tree search programs that simulate thousands of random games of self-play. We also introduce a new search algorithm that combines Monte Carlo simulation with value and policy networks. Using this search algorithm, our program AlphaGo achieved a 99.8 winning rate against other Go programs, and defeated the human European Go champion by 5 games to 0. This is the first time that a computer program has defeated a human professional player in the full-sized game of Go, a feat previously thought to be at least a decade away.", "We trained a convolutional neural network (CNN) to map raw pixels from a single front-facing camera directly to steering commands. This end-to-end approach proved surprisingly powerful. With minimum training data from humans the system learns to drive in traffic on local roads with or without lane markings and on highways. It also operates in areas with unclear visual guidance such as in parking lots and on unpaved roads. The system automatically learns internal representations of the necessary processing steps such as detecting useful road features with only the human steering angle as the training signal. We never explicitly trained it to detect, for example, the outline of roads. Compared to explicit decomposition of the problem, such as lane marking detection, path planning, and control, our end-to-end system optimizes all processing steps simultaneously. We argue that this will eventually lead to better performance and smaller systems. Better performance will result because the internal components self-optimize to maximize overall system performance, instead of optimizing human-selected intermediate criteria, e.g., lane detection. Such criteria understandably are selected for ease of human interpretation which doesn't automatically guarantee maximum system performance. Smaller networks are possible because the system learns to solve the problem with the minimal number of processing steps. We used an NVIDIA DevBox and Torch 7 for training and an NVIDIA DRIVE(TM) PX self-driving car computer also running Torch 7 for determining where to drive. The system operates at 30 frames per second (FPS)." ] }
1811.11823
2902651012
Detecting semantic parts of an object is a challenging task in computer vision, particularly because it is hard to construct large annotated datasets due to the difficulty of annotating semantic parts. In this paper we present an approach which learns from a small training dataset of annotated semantic parts, where the object is seen from a limited range of viewpoints, but generalizes to detect semantic parts from a much larger range of viewpoints. Our approach is based on a matching algorithm for finding accurate spatial correspondence between two images, which enables semantic parts annotated on one image to be transplanted to another. In particular, this enables images in the training dataset to be matched to a virtual 3D model of the object (for simplicity, we assume that the object viewpoint can be estimated by standard techniques). Then a clustering algorithm is used to annotate the semantic parts of the 3D virtual model. This virtual 3D model can be used to synthesize annotated images from a large range of viewpoint. These can be matched to images in the test set, using the same matching algorithm, to detect semantic parts in novel viewpoints of the object. Our algorithm is very simple, intuitive, and contains very few parameters. We evaluate our approach in the car subclass of the VehicleSemanticPart dataset. We show it outperforms standard deep network approaches and, in particular, performs much better on novel viewpoints.
In recent years, deep learning @cite_7 has advanced the research and applications of computer vision to a higher level. With the availability of large-scale image datasets @cite_15 as well as powerful computational devices, researchers have designed very deep neural networks @cite_1 @cite_16 @cite_6 to accomplish complicated vision tasks. The fundamental idea of deep learning is to organize neurons (the basic units that perform specified mathematical functions) in a hierarchical manner and to tune the parameters by fitting a dataset. Aided by learning techniques that alleviate numerical stability issues @cite_11 @cite_51 @cite_52 , researchers have developed deep learning in two major directions: increasing the depth of the network for higher recognition accuracy @cite_8 @cite_50 @cite_17 , and transferring pre-trained models to various tasks, including feature extraction @cite_24 @cite_9 , object detection @cite_30 @cite_40 @cite_3 , semantic segmentation @cite_42 @cite_49 , pose estimation @cite_47 , and boundary detection @cite_33 .
{ "cite_N": [ "@cite_30", "@cite_42", "@cite_3", "@cite_15", "@cite_8", "@cite_52", "@cite_49", "@cite_17", "@cite_7", "@cite_6", "@cite_40", "@cite_50", "@cite_16", "@cite_33", "@cite_9", "@cite_1", "@cite_24", "@cite_47", "@cite_51", "@cite_11" ], "mid": [ "2102605133", "1903029394", "2613718673", "2108598243", "2194775991", "2949117887", "2412782625", "", "", "2097117768", "", "2963446712", "1686810756", "", "2062118960", "2163605009", "2155541015", "2307770531", "2095705004", "1665214252" ], "abstract": [ "Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30 relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3 . Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http: www.cs.berkeley.edu rbg rcnn.", "Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet [20], the VGG net [31], and GoogLeNet [32]) into fully convolutional networks and transfer their learned representations by fine-tuning [3] to the segmentation task. We then define a skip architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20 relative improvement to 62.2 mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes less than one fifth of a second for a typical image.", "State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [7] and Fast R-CNN [5] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully-convolutional network that simultaneously predicts object bounds and objectness scores at each position. 
RPNs are trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. With a simple alternating optimization, RPN and Fast R-CNN can be trained to share convolutional features. For the very deep VGG-16 model [19], our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007 (73.2 mAP) and 2012 (70.4 mAP) using 300 proposals per image. Code is available at https: github.com ShaoqingRen faster_rcnn.", "The explosion of image data on the Internet has the potential to foster more sophisticated and robust models and algorithms to index, retrieve, organize and interact with images and multimedia data. But exactly how such data can be harnessed and organized remains a critical problem. We introduce here a new database called “ImageNet”, a large-scale ontology of images built upon the backbone of the WordNet structure. ImageNet aims to populate the majority of the 80,000 synsets of WordNet with an average of 500-1000 clean and full resolution images. This will result in tens of millions of annotated images organized by the semantic hierarchy of WordNet. This paper offers a detailed analysis of ImageNet in its current state: 12 subtrees with 5247 synsets and 3.2 million images in total. We show that ImageNet is much larger in scale and diversity and much more accurate than the current image datasets. Constructing such a large-scale database is a challenging task. We describe the data collection scheme with Amazon Mechanical Turk. Lastly, we illustrate the usefulness of ImageNet through three simple applications in object recognition, image classification and automatic object clustering. We hope that the scale, accuracy, diversity and hierarchical structure of ImageNet can offer unparalleled opportunities to researchers in the computer vision community and beyond.", "Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57 error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28 relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.", "Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. 
This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization. It also acts as a regularizer, in some cases eliminating the need for Dropout. Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.9 top-5 validation error (and 4.8 test error), exceeding the accuracy of human raters.", "In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First , we highlight convolution with upsampled filters, or ‘atrous convolution’, as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second , we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields-of-views, thus capturing objects as well as image context at multiple scales. Third , we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but has a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed “DeepLab” system sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7 percent mIOU in the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.", "", "", "We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. 
One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.", "", "Recent work has shown that convolutional networks can be substantially deeper, more accurate, and efficient to train if they contain shorter connections between layers close to the input and those close to the output. In this paper, we embrace this observation and introduce the Dense Convolutional Network (DenseNet), which connects each layer to every other layer in a feed-forward fashion. Whereas traditional convolutional networks with L layers have L connections—one between each layer and its subsequent layer—our network has L(L+1) 2 direct connections. For each layer, the feature-maps of all preceding layers are used as inputs, and its own feature-maps are used as inputs into all subsequent layers. DenseNets have several compelling advantages: they alleviate the vanishing-gradient problem, strengthen feature propagation, encourage feature reuse, and substantially reduce the number of parameters. We evaluate our proposed architecture on four highly competitive object recognition benchmark tasks (CIFAR-10, CIFAR-100, SVHN, and ImageNet). DenseNets obtain significant improvements over the state-of-the-art on most of them, whilst requiring less memory and computation to achieve high performance. Code and pre-trained models are available at https: github.com liuzhuang13 DenseNet.", "In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.", "", "Recent results indicate that the generic descriptors extracted from the convolutional neural networks are very powerful. This paper adds to the mounting evidence that this is indeed the case. We report on a series of experiments conducted for different recognition tasks using the publicly available code and model of the OverFeat network which was trained to perform object classification on ILSVRC13. We use features extracted from the OverFeat network as a generic image representation to tackle the diverse range of recognition tasks of object image classification, scene recognition, fine grained recognition, attribute detection and image retrieval applied to a diverse set of datasets. We selected these tasks and datasets as they gradually move further away from the original task and data the OverFeat network was trained to solve. Astonishingly, we report consistent superior results compared to the highly tuned state-of-the-art systems in all the visual classification tasks on various datasets. For instance retrieval it consistently outperforms low memory footprint methods except for sculptures dataset. 
The results are achieved using a linear SVM classifier (or L2 distance in case of retrieval) applied to a feature representation of size 4096 extracted from a layer in the net. The representations are further modified using simple augmentation techniques e.g. jittering. The results strongly suggest that features obtained from deep learning with convolutional nets should be the primary candidate in most visual recognition tasks.", "We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5 and 17.0 which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called \"dropout\" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3 , compared to 26.2 achieved by the second-best entry.", "We evaluate whether features extracted from the activation of a deep convolutional network trained in a fully supervised fashion on a large, fixed set of object recognition tasks can be repurposed to novel generic tasks. Our generic tasks may differ significantly from the originally trained tasks and there may be insufficient labeled or unlabeled data to conventionally train or adapt a deep architecture to the new tasks. We investigate and visualize the semantic clustering of deep convolutional features with respect to a variety of such tasks, including scene recognition, domain adaptation, and fine-grained recognition challenges. We compare the efficacy of relying on various network levels to define a fixed feature, and report novel results that significantly outperform the state-of-the-art on several important vision challenges. We are releasing DeCAF, an open-source implementation of these deep convolutional activation features, along with all associated network parameters to enable vision researchers to be able to conduct experimentation with deep representations across a range of visual concept learning paradigms.", "This work introduces a novel convolutional network architecture for the task of human pose estimation. Features are processed across all scales and consolidated to best capture the various spatial relationships associated with the body. We show how repeated bottom-up, top-down processing used in conjunction with intermediate supervision is critical to improving the performance of the network. We refer to the architecture as a “stacked hourglass” network based on the successive steps of pooling and upsampling that are done to produce a final set of predictions. State-of-the-art results are achieved on the FLIC and MPII benchmarks outcompeting all recent methods.", "Deep neural nets with a large number of parameters are very powerful machine learning systems. However, overfitting is a serious problem in such networks.
Large networks are also slow to use, making it difficult to deal with overfitting by combining the predictions of many different large neural nets at test time. Dropout is a technique for addressing this problem. The key idea is to randomly drop units (along with their connections) from the neural network during training. This prevents units from co-adapting too much. During training, dropout samples from an exponential number of different \"thinned\" networks. At test time, it is easy to approximate the effect of averaging the predictions of all these thinned networks by simply using a single unthinned network that has smaller weights. This significantly reduces overfitting and gives major improvements over other regularization methods. We show that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.", "Restricted Boltzmann machines were developed using binary stochastic hidden units. These can be generalized by replacing each binary unit by an infinite number of copies that all have the same weights but have progressively more negative biases. The learning and inference rules for these \"Stepped Sigmoid Units\" are unchanged. They can be approximated efficiently by noisy, rectified linear units. Compared with binary units, these units learn features that are better for object recognition on the NORB dataset and face verification on the Labeled Faces in the Wild dataset. Unlike binary units, rectified linear units preserve information about relative intensities as information travels through multiple layers of feature detectors." ] }
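The related-work paragraph in this record cites ReLU units, dropout, and batch normalization as techniques that stabilize and regularize deep network training. A minimal PyTorch sketch combining the three (layer sizes and the 10-way head are illustrative assumptions, not from any cited architecture):

```python
import torch
import torch.nn as nn

# A small classifier using the three cited techniques together:
# ReLU non-linearities, batch normalization, and dropout.
model = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1),
    nn.BatchNorm2d(32),          # normalize layer inputs per mini-batch
    nn.ReLU(),                   # rectified linear unit
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Dropout(p=0.5),           # randomly drop units during training
    nn.Linear(32, 10),           # 10-way classifier head (illustrative)
)

model.train()   # dropout active; BN uses batch statistics
print(model(torch.randn(8, 3, 64, 64)).shape)  # torch.Size([8, 10])
model.eval()    # dropout off; BN uses running statistics at test time
```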
1811.11823
2902651012
Detecting semantic parts of an object is a challenging task in computer vision, particularly because it is hard to construct large annotated datasets due to the difficulty of annotating semantic parts. In this paper we present an approach which learns from a small training dataset of annotated semantic parts, where the object is seen from a limited range of viewpoints, but generalizes to detect semantic parts from a much larger range of viewpoints. Our approach is based on a matching algorithm for finding accurate spatial correspondence between two images, which enables semantic parts annotated on one image to be transplanted to another. In particular, this enables images in the training dataset to be matched to a virtual 3D model of the object (for simplicity, we assume that the object viewpoint can be estimated by standard techniques). Then a clustering algorithm is used to annotate the semantic parts of the 3D virtual model. This virtual 3D model can be used to synthesize annotated images from a large range of viewpoint. These can be matched to images in the test set, using the same matching algorithm, to detect semantic parts in novel viewpoints of the object. Our algorithm is very simple, intuitive, and contains very few parameters. We evaluate our approach in the car subclass of the VehicleSemanticPart dataset. We show it outperforms standard deep network approaches and, in particular, performs much better on novel viewpoints.
For object detection, the most popular pipeline in the context of deep learning involves first extracting a set of bounding boxes known as region proposals @cite_0 @cite_23 @cite_3 , and then determining whether each of them belongs to the target class @cite_30 @cite_40 @cite_3 @cite_14 @cite_38 @cite_4 . To improve spatial accuracy, the techniques of bounding-box regression @cite_26 and non-maximum suppression @cite_58 are widely used for post-processing. Boosted by high-quality visual features and end-to-end optimization, this framework significantly outperforms conventional deformable part-based models @cite_13 , which were trained on top of handcrafted features @cite_39 . Despite its success, this framework still suffers from weak explainability, as both the object proposal extraction and classification modules are black boxes, and thus easily confused by occlusion @cite_27 and adversarial attacks @cite_12 . There have also been research efforts using mid-level or high-level contextual cues to detect objects @cite_29 or semantic parts @cite_27 . These methods, while limited to rigid objects such as vehicles, often benefit from better transferability and work reasonably well on partially occluded data @cite_27 .
{ "cite_N": [ "@cite_30", "@cite_38", "@cite_14", "@cite_4", "@cite_26", "@cite_29", "@cite_3", "@cite_39", "@cite_0", "@cite_40", "@cite_27", "@cite_23", "@cite_58", "@cite_13", "@cite_12" ], "mid": [ "2102605133", "2193145675", "2407521645", "2963037989", "2886904239", "2552264258", "2613718673", "2161969291", "2066624635", "", "2755542034", "2088049833", "", "2168356304", "2604505099" ], "abstract": [ "Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30 relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3 . Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http: www.cs.berkeley.edu rbg rcnn.", "We present a method for detecting objects in images using a single deep neural network. Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. SSD is simple relative to methods that require object proposals because it completely eliminates proposal generation and subsequent pixel or feature resampling stages and encapsulates all computation in a single network. This makes SSD easy to train and straightforward to integrate into systems that require a detection component. Experimental results on the PASCAL VOC, COCO, and ILSVRC datasets confirm that SSD has competitive accuracy to methods that utilize an additional object proposal step and is much faster, while providing a unified framework for both training and inference. For (300 300 ) input, SSD achieves 74.3 mAP on VOC2007 test at 59 FPS on a Nvidia Titan X and for (512 512 ) input, SSD achieves 76.9 mAP, outperforming a comparable state of the art Faster R-CNN model. Compared to other single stage methods, SSD has much better accuracy even with a smaller input image size. Code is available at https: github.com weiliu89 caffe tree ssd.", "We present region-based, fully convolutional networks for accurate and efficient object detection. In contrast to previous region-based detectors such as Fast Faster R-CNN [7, 19] that apply a costly per-region subnetwork hundreds of times, our region-based detector is fully convolutional with almost all computation shared on the entire image. 
To achieve this goal, we propose position-sensitive score maps to address a dilemma between translation-invariance in image classification and translation-variance in object detection. Our method can thus naturally adopt fully convolutional image classifier backbones, such as the latest Residual Networks (ResNets) [10], for object detection. We show competitive results on the PASCAL VOC datasets (e.g., 83.6 mAP on the 2007 set) with the 101-layer ResNet. Meanwhile, our result is achieved at a test-time speed of 170ms per image, 2.5-20x faster than the Faster R-CNN counterpart. Code is made publicly available at: https: github.com daijifeng001 r-fcn.", "We present YOLO, a new approach to object detection. Prior work on object detection repurposes classifiers to perform detection. Instead, we frame object detection as a regression problem to spatially separated bounding boxes and associated class probabilities. A single neural network predicts bounding boxes and class probabilities directly from full images in one evaluation. Since the whole detection pipeline is a single network, it can be optimized end-to-end directly on detection performance. Our unified architecture is extremely fast. Our base YOLO model processes images in real-time at 45 frames per second. A smaller version of the network, Fast YOLO, processes an astounding 155 frames per second while still achieving double the mAP of other real-time detectors. Compared to state-of-the-art detection systems, YOLO makes more localization errors but is less likely to predict false positives on background. Finally, YOLO learns very general representations of objects. It outperforms other detection methods, including DPM and R-CNN, when generalizing from natural images to other domains like artwork.", "Modern CNN-based object detectors rely on bounding box regression and non-maximum suppression to localize objects. While the probabilities for class labels naturally reflect classification confidence, localization confidence is absent. This makes properly localized bounding boxes degenerate during iterative regression or even suppressed during NMS. In the paper we propose IoU-Net learning to predict the IoU between each detected bounding box and the matched ground-truth. The network acquires this confidence of localization, which improves the NMS procedure by preserving accurately localized bounding boxes. Furthermore, an optimization-based bounding box refinement method is proposed, where the predicted IoU is formulated as the objective. Extensive experiments on the MS-COCO dataset show the effectiveness of IoU-Net, as well as its compatibility with and adaptivity to several state-of-the-art object detectors.", "We address the key question of how object part representations can be found from the internal states of CNNs that are trained for high-level tasks, such as object classification. This work provides a new unsupervised method to learn semantic parts and gives new understanding of the internal representations of CNNs. Our technique is based on the hypothesis that semantic parts are represented by populations of neurons rather than by single filters. We propose a clustering technique to extract part representations, which we call Visual Concepts. We show that visual concepts are semantically coherent in that they represent semantic parts, and visually coherent in that corresponding image patches appear very similar. 
Also, visual concepts provide full spatial coverage of the parts of an object, rather than a few sparse parts as is typically found in keypoint annotations. Furthermore, we treat a single visual concept as a part detector and evaluate it for keypoint detection using the PASCAL3D+ dataset and for part detection using our newly annotated ImageNetPart dataset. The experiments demonstrate that visual concepts can be used to detect parts. We also show that some visual concepts respond to several semantic parts, provided these parts are visually similar. Thus visual concepts have the essential properties: semantic meaning and detection capability. Note that our ImageNetPart dataset gives rich part annotations which cover the whole object, making it useful for other part-related applications.", "State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [7] and Fast R-CNN [5] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully-convolutional network that simultaneously predicts object bounds and objectness scores at each position. RPNs are trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. With a simple alternating optimization, RPN and Fast R-CNN can be trained to share convolutional features. For the very deep VGG-16 model [19], our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007 (73.2 mAP) and 2012 (70.4 mAP) using 300 proposals per image. Code is available at https: github.com ShaoqingRen faster_rcnn.", "We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds.", "We present a generic objectness measure, quantifying how likely it is for an image window to contain an object of any class. We explicitly train it to distinguish objects with a well-defined boundary in space, such as cows and telephones, from amorphous background elements, such as grass and road. The measure combines in a Bayesian framework several image cues measuring characteristics of objects, such as appearing different from their surroundings and having a closed boundary. These include an innovative cue to measure the closed boundary characteristic. In experiments on the challenging PASCAL VOC 07 dataset, we show this new cue to outperform a state-of-the-art saliency measure, and the combined objectness measure to perform better than any cue alone.
We also compare to interest point operators, a HOG detector, and three recent works aiming at automatic object segmentation. Finally, we present two applications of objectness. In the first, we sample a small number of windows according to their objectness probability and give an algorithm to employ them as location priors for modern class-specific object detectors. As we show experimentally, this greatly reduces the number of windows evaluated by the expensive class-specific model. In the second application, we use objectness as a complementary score in addition to the class-specific model, which leads to fewer false positives. As shown in several recent papers, objectness can act as a valuable focus of attention mechanism in many other applications operating on image windows, including weakly supervised learning of object categories, unsupervised pixelwise segmentation, and object tracking in video. Computing objectness is very efficient and takes only about 4 sec. per image.", "", "In this paper, we study the task of detecting semantic parts of an object. This is very important in computer vision, as it provides the possibility to parse an object as humans do, and helps us better understand object detection algorithms. Also, detecting semantic parts is very challenging especially when the parts are partially or fully occluded. In this scenario, the popular proposal-based methods like Faster-RCNN often produce unsatisfactory results, because both the proposal extraction and classification stages may be confused by the irrelevant occluders. To this end, we propose a novel detection framework, named DeepVoting, which accumulates local visual cues, called visual concepts (VC), to locate the semantic parts. Our approach involves adding two layers after the intermediate outputs of a deep neural network. The first layer is used to extract VC responses, and the second layer performs a voting mechanism to capture the spatial relationship between VC's and semantic parts. The benefit is that each semantic part is supported by multiple VC's. Even if some of the supporting VC's are missing due to occlusion, we can still infer the presence of the target semantic part using the remaining ones. To avoid generating an exponentially large training set to cover all occlusion cases, we train our model without seeing occlusion and transfer the learned knowledge to deal with occlusions. This setting favors learning the models which are naturally robust and adaptive to occlusions instead of over-fitting the occlusion patterns in the training data. In experiments, DeepVoting shows significantly better performance on semantic part detection in occlusion scenarios, compared with Faster-RCNN, with one order of magnitude fewer parameters and 2.5x testing speed. In addition, DeepVoting is explainable as the detection result can be diagnosed via looking up the voted VC's.", "This paper addresses the problem of generating possible object locations for use in object recognition. We introduce selective search which combines the strength of both an exhaustive search and segmentation. Like segmentation, we use the image structure to guide our sampling process. Like exhaustive search, we aim to capture all possible object locations. Instead of a single technique to generate possible object locations, we diversify our search and use a variety of complementary image partitionings to deal with as many image conditions as possible.
Our selective search results in a small set of data-driven, class-independent, high quality locations, yielding 99 recall and a Mean Average Best Overlap of 0.879 at 10,097 locations. The reduced number of locations compared to an exhaustive search enables the use of stronger machine learning techniques and stronger appearance models for object recognition. In this paper we show that our selective search enables the use of the powerful Bag-of-Words model for recognition. The selective search software is made publicly available (Software: http: disi.unitn.it uijlings SelectiveSearch.html ).", "", "We describe an object detection system based on mixtures of multiscale deformable part models. Our system is able to represent highly variable object classes and achieves state-of-the-art results in the PASCAL object detection challenges. While deformable part models have become quite popular, their value had not been demonstrated on difficult benchmarks such as the PASCAL data sets. Our system relies on new methods for discriminative training with partially labeled data. We combine a margin-sensitive approach for data-mining hard negative examples with a formalism we call latent SVM. A latent SVM is a reformulation of MI--SVM in terms of latent variables. A latent SVM is semiconvex, and the training problem becomes convex once latent information is specified for the positive examples. This leads to an iterative training algorithm that alternates between fixing latent values for positive examples and optimizing the latent SVM objective function.", "It has been well demonstrated that adversarial examples, i.e., natural images with visually imperceptible perturbations added, cause deep networks to fail on image classification. In this paper, we extend adversarial examples to semantic segmentation and object detection which are much more difficult. Our observation is that both segmentation and detection are based on classifying multiple targets on an image (e.g., the target is a pixel or a receptive field in segmentation, and an object proposal in detection). This inspires us to optimize a loss function over a set of targets for generating adversarial perturbations. Based on this, we propose a novel algorithm named Dense Adversary Generation (DAG), which applies to the state-of-the-art networks for segmentation and detection. We find that the adversarial perturbations can be transferred across networks with different training data, based on different architectures, and even for different recognition tasks. In particular, the transfer ability across networks with the same architecture is more significant than in other cases. Besides, we show that summing up heterogeneous perturbations often leads to better transfer performance, which provides an effective method of black-box adversarial attack." ] }
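The related-work paragraph in this record cites non-maximum suppression as a standard post-processing step for proposal-based detectors. A minimal NumPy sketch of greedy NMS (the IoU threshold and example boxes are illustrative, not values from any cited paper):

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression. boxes: (N, 4) as [x1, y1, x2, y2]."""
    order = scores.argsort()[::-1]              # highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # IoU of the top-scoring box with all remaining boxes
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + areas - inter)
        order = order[1:][iou <= iou_thresh]    # drop near-duplicate detections
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]], dtype=float)
print(nms(boxes, np.array([0.9, 0.8, 0.7])))    # [0, 2] -- box 1 suppressed
```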
1811.11823
2902651012
Detecting semantic parts of an object is a challenging task in computer vision, particularly because it is hard to construct large annotated datasets due to the difficulty of annotating semantic parts. In this paper we present an approach which learns from a small training dataset of annotated semantic parts, where the object is seen from a limited range of viewpoints, but generalizes to detect semantic parts from a much larger range of viewpoints. Our approach is based on a matching algorithm for finding accurate spatial correspondence between two images, which enables semantic parts annotated on one image to be transplanted to another. In particular, this enables images in the training dataset to be matched to a virtual 3D model of the object (for simplicity, we assume that the object viewpoint can be estimated by standard techniques). Then a clustering algorithm is used to annotate the semantic parts of the 3D virtual model. This virtual 3D model can be used to synthesize annotated images from a large range of viewpoint. These can be matched to images in the test set, using the same matching algorithm, to detect semantic parts in novel viewpoints of the object. Our algorithm is very simple, intuitive, and contains very few parameters. We evaluate our approach in the car subclass of the VehicleSemanticPart dataset. We show it outperforms standard deep network approaches and, in particular, performs much better on novel viewpoints.
Another approach to visual recognition is to find correspondences between features or images, so that annotations from one (training) image can be transplanted to another (testing) image @cite_25 @cite_53 @cite_46 @cite_37 @cite_59 . This topic was studied in the early years of computer vision @cite_54 and later built upon handcrafted features @cite_20 @cite_56 @cite_22 . There have been efforts to introduce semantic information into matching @cite_43 and to improve robustness against noise @cite_57 . Recently, deep learning has brought a significant boost to these problems by improving both features @cite_9 @cite_48 and matching algorithms @cite_21 @cite_19 @cite_36 @cite_10 , although a critical part of these frameworks still lies in end-to-end optimization of deep networks.
{ "cite_N": [ "@cite_37", "@cite_22", "@cite_36", "@cite_48", "@cite_53", "@cite_54", "@cite_9", "@cite_21", "@cite_56", "@cite_57", "@cite_43", "@cite_19", "@cite_59", "@cite_46", "@cite_10", "@cite_25", "@cite_20" ], "mid": [ "2962981304", "2134292164", "2474531669", "1899185266", "2593948489", "2561377267", "2062118960", "764651262", "2148534289", "2157656099", "2090518410", "", "2747550417", "2606149788", "2560474170", "2963325280", "2124404372" ], "abstract": [ "Learning automatically the structure of object categories remains an important open problem in computer vision. In this paper, we propose a novel unsupervised approach that can discover and learn landmarks in object categories, thus characterizing their structure. Our approach is based on factorizing image deformations, as induced by a viewpoint change or an object deformation, by learning a deep neural network that detects landmarks consistently with such visual effects. Furthermore, we show that the learned landmarks establish meaningful correspondences between different object instances in a category without having to impose this requirement explicitly. We assess the method qualitatively on a variety of object types, natural and man-made. We also show that our unsupervised landmarks are highly predictive of manually-annotated landmarks in face benchmark datasets, and can be used to regress these with a high degree of accuracy.", "Establishing dense correspondences reliably between a pair of images is an important vision task with many applications. Though significant advance has been made towards estimating dense stereo and optical flow fields for two images adjacent in viewpoint or in time, building reliable dense correspondence fields for two general images still remains largely unsolved. For instance, two given images sharing some content exhibit dramatic photometric and geometric variations, or they depict different 3D scenes of similar scene characteristics. Fundamental challenges to such an image or scene alignment task are often multifold, which render many existing techniques fall short of producing dense correspondences robustly and efficiently. This paper presents a novel approach called DAISY filter flow (DFF) to address this challenging task. Inspired by the recent PatchMatch Filter technique, we leverage and extend a few established methods: 1) DAISY descriptors, 2) filter-based efficient flow inference, and 3) the PatchMatch fast search. Coupling and optimizing these modules seamlessly with image segments as the bridge, the proposed DFF approach enables efficiently performing dense descriptor-based correspondence field estimation in a generalized high-dimensional label space, which is augmented by scales and rotations. Experiments on a variety of challenging scenes show that our DFF approach estimates spatially coherent yet discontinuity-preserving image alignment results both robustly and efficiently.", "Discriminative deep learning approaches have shown impressive results for problems where human-labeled ground truth is plentiful, but what about tasks where labels are difficult or impossible to obtain? This paper tackles one such problem: establishing dense visual correspondence across different object instances. For this task, although we do not know what the ground-truth is, we know it should be consistent across instances of that category. 
We exploit this consistency as a supervisory signal to train a convolutional neural network to predict cross-instance correspondences between pairs of images depicting objects of the same category. For each pair of training images we find an appropriate 3D CAD model and render two synthetic views to link in with the pair, establishing a correspondence flow 4-cycle. We use ground-truth synthetic-to-synthetic correspondences, provided by the rendering engine, to train a ConvNet to predict synthetic-to-real, real-to-real and real-to-synthetic correspondences that are cycle-consistent with the ground-truth. At test time, no CAD models are required. We demonstrate that our end-to-end trained ConvNet supervised by cycle-consistency outperforms state-of-the-art pairwise matching methods in correspondence-related tasks.", "With the success of new computational architectures for visual processing, such as convolutional neural networks (CNN) and access to image databases with millions of labeled examples (e.g., ImageNet, Places), the state of the art in computer vision is advancing rapidly. One important factor for continued progress is to understand the representations that are learned by the inner layers of these deep architectures. Here we show that object detectors emerge from training CNNs to perform scene classification. As scenes are composed of objects, the CNN for scene classification automatically discovers meaningful object detectors, representative of the learned scene categories. With object detectors emerging as a result of learning to recognize scenes, our work demonstrates that the same network can perform both scene recognition and object localization in a single forward-pass, without ever having been explicitly taught the notion of objects.", "We present a descriptor, called fully convolutional self-similarity (FCSS), for dense semantic correspondence. To robustly match points among different instances within the same object class, we formulate FCSS using local self-similarity (LSS) within a fully convolutional network. In contrast to existing CNN-based descriptors, FCSS is inherently insensitive to intra-class appearance variations because of its LSS-based structure, while maintaining the precise localization ability of deep neural networks. The sampling patterns of local structure and the self-similarity measure are jointly learned within the proposed network in an end-to-end and multi-scale manner. As training data for semantic correspondence is rather limited, we propose to leverage object candidate priors provided in existing image datasets and also correspondence consistency between object pairs to enable weakly-supervised learning. Experiments demonstrate that FCSS outperforms conventional handcrafted descriptors and CNN-based descriptors on various benchmarks.", "A stereo matching method that uses multiple stereo pairs with various baselines generated by a lateral displacement of a camera to obtain precise distance estimates without suffering from ambiguity is presented. Matching is performed simply by computing the sum of squared-difference (SSD) values. The SSD functions for individual stereo pairs are represented with respect to the inverse distance and are then added to produce the sum of SSDs. This resulting function is called the SSSD-in-inverse-distance.
It is shown that the SSSD-in-inverse-distance function exhibits a unique and clear minimum at the correct matching position, even when the underlying intensity patterns of the scene include ambiguities or repetitive patterns. The authors first define a stereo algorithm based on the SSSD-in-inverse-distance and present a mathematical analysis to show how the algorithm can remove ambiguity and increase precision. Experimental results with real stereo images are presented to demonstrate the effectiveness of the algorithm.", "Recent results indicate that the generic descriptors extracted from the convolutional neural networks are very powerful. This paper adds to the mounting evidence that this is indeed the case. We report on a series of experiments conducted for different recognition tasks using the publicly available code and model of the OverFeat network which was trained to perform object classification on ILSVRC13. We use features extracted from the OverFeat network as a generic image representation to tackle the diverse range of recognition tasks of object image classification, scene recognition, fine grained recognition, attribute detection and image retrieval applied to a diverse set of datasets. We selected these tasks and datasets as they gradually move further away from the original task and data the OverFeat network was trained to solve. Astonishingly, we report consistent superior results compared to the highly tuned state-of-the-art systems in all the visual classification tasks on various datasets. For instance retrieval it consistently outperforms low memory footprint methods except for sculptures dataset. The results are achieved using a linear SVM classifier (or L2 distance in case of retrieval) applied to a feature representation of size 4096 extracted from a layer in the net. The representations are further modified using simple augmentation techniques e.g. jittering. The results strongly suggest that features obtained from deep learning with convolutional nets should be the primary candidate in most visual recognition tasks.", "Convolutional neural networks (CNNs) have recently been very successful in a variety of computer vision tasks, especially on those linked to recognition. Optical flow estimation has not been among the tasks CNNs succeeded at. In this paper we construct CNNs which are capable of solving the optical flow estimation problem as a supervised learning task. We propose and compare two architectures: a generic architecture and another one including a layer that correlates feature vectors at different image locations. Since existing ground truth data sets are not sufficiently large to train a CNN, we generate a large synthetic Flying Chairs dataset. We show that networks trained on this unrealistic data still generalize very well to existing datasets such as Sintel and KITTI, achieving competitive accuracy at frame rates of 5 to 10 fps.", "Many computer vision tasks can be formulated as labeling problems. The desired solution is often a spatially smooth labeling where label transitions are aligned with color edges of the input image. We show that such solutions can be efficiently achieved by smoothing the label costs with a very fast edge-preserving filter. In this paper, we propose a generic and simple framework comprising three steps: 1) constructing a cost volume, 2) fast cost volume filtering, and 3) Winner-Takes-All label selection.
Our main contribution is to show that with such a simple framework state-of-the-art results can be achieved for several computer vision applications. In particular, we achieve 1) disparity maps in real time whose quality exceeds those of all other fast (local) approaches on the Middlebury stereo benchmark, and 2) optical flow fields which contain very fine structures as well as large displacements. To demonstrate robustness, the few parameters of our framework are set to nearly identical values for both applications. Also, competitive results for interactive image segmentation are presented. With this work, we hope to inspire other researchers to leverage this framework to other application areas.", "We introduce a new transformation estimation algorithm using the @math estimator and apply it to non-rigid registration for building robust sparse and dense correspondences. In the sparse point case, our method iteratively recovers the point correspondence and estimates the transformation between two point sets. Feature descriptors such as shape context are used to establish rough correspondence. We then estimate the transformation using our robust algorithm. This enables us to deal with the noise and outliers which arise in the correspondence step. The transformation is specified in a functional space, more specifically a reproducing kernel Hilbert space. In the dense point case for nonrigid image registration, our approach consists of matching both sparsely and densely sampled SIFT features, and it has particular advantages in handling significant scale changes and rotations. The experimental results show that our approach greatly outperforms state-of-the-art methods, particularly when the data contains severe outliers.", "While image alignment has been studied in different areas of computer vision for decades, aligning images depicting different scenes remains a challenging problem. Analogous to optical flow, where an image is aligned to its temporally adjacent frame, we propose SIFT flow, a method to align an image to its nearest neighbors in a large image corpus containing a variety of scenes. The SIFT flow algorithm consists of matching densely sampled, pixelwise SIFT features between two images while preserving spatial discontinuities. The SIFT features allow robust matching across different scene object appearances, whereas the discontinuity-preserving spatial model allows matching of objects located at different parts of the scene. Experiments show that the proposed approach robustly aligns complex scene pairs containing significant spatial differences. Based on SIFT flow, we propose an alignment-based large database framework for image analysis and synthesis, where image information is transferred from the nearest neighbors to a query image according to the dense scene correspondence. This framework is demonstrated through concrete applications such as motion field prediction from a single image, motion synthesis via object transfer, satellite image registration, and face recognition.", "", "Estimating dense visual correspondences between objects with intra-class variation, deformations and background clutter remains a challenging problem. Thanks to the breakthrough of CNNs there are new powerful features available. Despite their easy accessibility and great success, existing semantic flow methods could not significantly benefit from these without extensive additional training. 
We introduce a novel method for semantic matching with pre-trained CNN features which is based on convolutional feature pyramids and activation guided feature selection. For the final matching we propose a sparse graph matching framework where each salient feature selects among a small subset of nearest neighbors in the target image. To improve our method in the unconstrained setting without bounding box annotations we introduce novel object proposal based matching constraints. Furthermore, we show that the sparse matching can be transformed into a dense correspondence field. Extensive experimental evaluations on benchmark datasets show that our method significantly outperforms existing semantic matching methods.", "Despite significant progress of deep learning in recent years, state-of-the-art semantic matching methods still rely on legacy features such as SIFT or HoG. We argue that the strong invariance properties that are key to the success of recent deep architectures on the classification task make them unfit for dense correspondence tasks, unless a large amount of supervision is used. In this work, we propose a deep network, termed AnchorNet, that produces image representations that are well-suited for semantic matching. It relies on a set of filters whose response is geometrically consistent across different object instances, even in the presence of strong intra-class, scale, or viewpoint variations. Trained only with weak image-level labels, the final representation successfully captures information about the object structure and improves results of state-of-the-art semantic matching methods such as the deformable spatial pyramid or the proposal flow methods. We show positive results on the cross-instance matching task where different instances of the same object category are matched as well as on a new cross-category semantic matching task aligning pairs of instances each from a different object class.", "The FlowNet demonstrated that optical flow estimation can be cast as a learning problem. However, the state of the art with regard to the quality of the flow has still been defined by traditional methods. Particularly on small displacements and real-world data, FlowNet cannot compete with variational methods. In this paper, we advance the concept of end-to-end learning of optical flow and make it work really well. The large improvements in quality and speed are caused by three major contributions: first, we focus on the training data and show that the schedule of presenting data during training is very important. Second, we develop a stacked architecture that includes warping of the second image with intermediate optical flow. Third, we elaborate on small displacements by introducing a subnetwork specializing on small motions. FlowNet 2.0 is only marginally slower than the original FlowNet but decreases the estimation error by more than 50 . It performs on par with state-of-the-art methods, while running at interactive frame rates. Moreover, we present faster variants that allow optical flow computation at up to 140fps with accuracy matching the original FlowNet.", "This paper addresses the problem of establishing semantic correspondences between images depicting different instances of the same object or scene category. Previous approaches focus on either combining a spatial regularizer with hand-crafted features, or learning a correspondence model for appearance only. 
We propose instead a convolutional neural network architecture, called SCNet, for learning a geometrically plausible model for semantic correspondence. SCNet uses region proposals as matching primitives, and explicitly incorporates geometric consistency in its loss function. It is trained on image pairs obtained from the PASCAL VOC 2007 keypoint dataset, and a comparative evaluation on several standard benchmarks demonstrates that the proposed approach substantially outperforms both recent deep learning architectures and previous methods based on hand-crafted features.", "The wide-baseline stereo problem, i.e. the problem of establishing correspondences between a pair of images taken from different viewpoints is studied. A new set of image elements that are put into correspondence, the so-called extremal regions, is introduced. Extremal regions possess highly desirable properties: the set is closed under (1) continuous (and thus projective) transformation of image coordinates and (2) monotonic transformation of image intensities. An efficient (near linear complexity) and practically fast detection algorithm (near frame rate) is presented for an affinely invariant stable subset of extremal regions, the maximally stable extremal regions (MSER). A new robust similarity measure for establishing tentative correspondences is proposed. The robustness ensures that invariants from multiple measurement regions (regions obtained by invariant constructions from extremal regions), some that are significantly larger (and hence discriminative) than the MSERs, may be used to establish tentative correspondences. The high utility of MSERs, multiple measurement regions and the robust metric is demonstrated in wide-baseline experiments on image pairs from both indoor and outdoor scenes. Significant change of scale (3.5×), illumination conditions, out-of-plane rotation, occlusion, locally anisotropic scale change and 3D translation of the viewpoint are all present in the test problems. Good estimates of epipolar geometry (average distance from corresponding points to the epipolar line below 0.09 of the inter-pixel distance) are obtained." ] }
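To make the matching idea in this record concrete, here is a minimal, hedged sketch of descriptor-based dense correspondence plus annotation transplanting, in the spirit of SIFT-flow-style and deep-feature matching. It is a brute-force illustration rather than any cited paper's exact algorithm, and the feature maps are assumed to be precomputed (handcrafted descriptors or CNN activations):

```python
import numpy as np

def dense_nn_correspondence(feat_a, feat_b):
    """For every cell of feat_a (Ha, Wa, C), find the most similar cell of
    feat_b (Hb, Wb, C) by cosine similarity. Brute force, for clarity only."""
    Ha, Wa, C = feat_a.shape
    Hb, Wb, _ = feat_b.shape
    a = feat_a.reshape(-1, C).astype(np.float64)
    b = feat_b.reshape(-1, C).astype(np.float64)
    a /= np.linalg.norm(a, axis=1, keepdims=True) + 1e-8
    b /= np.linalg.norm(b, axis=1, keepdims=True) + 1e-8
    best = (a @ b.T).argmax(axis=1)        # index of the best match in B
    ys, xs = np.unravel_index(best, (Hb, Wb))
    return np.stack([ys, xs], axis=-1).reshape(Ha, Wa, 2)

def transplant(corr, keypoints_yx):
    """Map (y, x) annotations from image A into image B via the flow field."""
    return [tuple(corr[y, x]) for (y, x) in keypoints_yx]

# Toy usage: random arrays stand in for real descriptor maps.
fa, fb = np.random.rand(8, 8, 16), np.random.rand(8, 8, 16)
print(transplant(dense_nn_correspondence(fa, fb), [(2, 3)]))
```

Real systems add a spatial smoothness term on top of this per-cell nearest-neighbor search, which is exactly what the SIFT flow and PatchMatch-style methods cited above contribute.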
1811.11823
2902651012
Detecting semantic parts of an object is a challenging task in computer vision, particularly because it is hard to construct large annotated datasets due to the difficulty of annotating semantic parts. In this paper we present an approach which learns from a small training dataset of annotated semantic parts, where the object is seen from a limited range of viewpoints, but generalizes to detect semantic parts from a much larger range of viewpoints. Our approach is based on a matching algorithm for finding accurate spatial correspondence between two images, which enables semantic parts annotated on one image to be transplanted to another. In particular, this enables images in the training dataset to be matched to a virtual 3D model of the object (for simplicity, we assume that the object viewpoint can be estimated by standard techniques). Then a clustering algorithm is used to annotate the semantic parts of the 3D virtual model. This virtual 3D model can be used to synthesize annotated images from a large range of viewpoints. These can be matched to images in the test set, using the same matching algorithm, to detect semantic parts in novel viewpoints of the object. Our algorithm is very simple, intuitive, and contains very few parameters. We evaluate our approach on the car subclass of the VehicleSemanticPart dataset. We show it outperforms standard deep network approaches and, in particular, performs much better on novel viewpoints.
Training a vision system requires a large amount of data. To alleviate this issue, researchers have turned to the virtual world for help, mainly because annotating virtual data is often easy and cheap @cite_35 . Another solution is to perform unsupervised or weakly-supervised training using consistency signals that exist naturally in the data @cite_36 @cite_44 @cite_45 . This paper investigates both of these possibilities. (A consistency-loss sketch follows the reference abstracts below.)
{ "cite_N": [ "@cite_36", "@cite_35", "@cite_44", "@cite_45" ], "mid": [ "2474531669", "2963826402", "2520707372", "2962793481" ], "abstract": [ "Discriminative deep learning approaches have shown impressive results for problems where human-labeled ground truth is plentiful, but what about tasks where labels are difficult or impossible to obtain? This paper tackles one such problem: establishing dense visual correspondence across different object instances. For this task, although we do not know what the ground-truth is, we know it should be consistent across instances of that category. We exploit this consistency as a supervisory signal to train a convolutional neural network to predict cross-instance correspondences between pairs of images depicting objects of the same category. For each pair of training images we find an appropriate 3D CAD model and render two synthetic views to link in with the pair, establishing a correspondence flow 4-cycle. We use ground-truth synthetic-to-synthetic correspondences, provided by the rendering engine, to train a ConvNet to predict synthetic-to-real, real-to-real and realto-synthetic correspondences that are cycle-consistent with the ground-truth. At test time, no CAD models are required. We demonstrate that our end-to-end trained ConvNet supervised by cycle-consistency outperforms stateof-the-art pairwise matching methods in correspondencerelated tasks.", "Computer graphics can not only generate synthetic images and ground truth but it also offers the possibility of constructing virtual worlds in which: (i) an agent can perceive, navigate, and take actions guided by AI algorithms, (ii) properties of the worlds can be modified (e.g., material and reflectance), (iii) physical simulations can be performed, and (iv) algorithms can be learnt and evaluated. But creating realistic virtual worlds is not easy. The game industry, however, has spent a lot of effort creating 3D worlds, which a player can interact with. So researchers can build on these resources to create virtual worlds, provided we can access and modify the internal data structures of the games. To enable this we created an open-source plugin UnrealCV (Project website: http: unrealcv.github.io) for a popular game engine Unreal Engine 4 (UE4). We show two applications: (i) a proof of concept image dataset, and (ii) linking Caffe with the virtual world to test deep network algorithms.", "Learning based methods have shown very promising results for the task of depth estimation in single images. However, most existing approaches treat depth prediction as a supervised regression problem and as a result, require vast quantities of corresponding ground truth depth data for training. Just recording quality depth data in a range of environments is a challenging problem. In this paper, we innovate beyond existing approaches, replacing the use of explicit depth data during training with easier-to-obtain binocular stereo footage. We propose a novel training objective that enables our convolutional neural network to learn to perform single image depth estimation, despite the absence of ground truth depth data. Ex-ploiting epipolar geometry constraints, we generate disparity images by training our network with an image reconstruction loss. We show that solving for image reconstruction alone results in poor quality depth images. 
To overcome this problem, we propose a novel training loss that enforces consistency between the disparities produced relative to both the left and right images, leading to improved performance and robustness compared to existing approaches. Our method produces state of the art results for monocular depth estimation on the KITTI driving dataset, even outperforming supervised methods that have been trained with ground truth depth.", "Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, for many tasks, paired training data will not be available. We present an approach for learning to translate an image from a source domain X to a target domain Y in the absence of paired examples. Our goal is to learn a mapping G : X → Y such that the distribution of images from G(X) is indistinguishable from the distribution Y using an adversarial loss. Because this mapping is highly under-constrained, we couple it with an inverse mapping F : Y → X and introduce a cycle consistency loss to push F(G(X)) ≈ X (and vice versa). Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc. Quantitative comparisons against several prior methods demonstrate the superiority of our approach." ] }
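As a concrete, hedged example of the naturally occurring consistency mentioned above, here is a minimal PyTorch sketch of left-right photometric consistency for self-supervised depth, in the spirit of the monocular-depth work cited in this record; the sign convention and the plain L1 reconstruction loss are illustrative assumptions rather than the cited paper's exact recipe:

```python
import torch
import torch.nn.functional as F

def warp_by_disparity(img, disp):
    """Horizontally warp img (N, C, H, W) by a per-pixel disparity map
    (N, 1, H, W) given in pixels, using differentiable bilinear sampling."""
    n, _, h, w = img.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                            torch.linspace(-1, 1, w), indexing="ij")
    grid = torch.stack([xs, ys], dim=-1).expand(n, h, w, 2).clone().to(img)
    grid[..., 0] += 2.0 * disp.squeeze(1) / w   # pixels -> [-1, 1] grid units
    return F.grid_sample(img, grid, align_corners=True)

def left_right_consistency_loss(left, right, disp_left):
    """Self-supervision: the right view warped by the predicted disparity
    should reconstruct the left view, so no depth labels are needed.
    The warp direction depends on the stereo rig (assumed here)."""
    return F.l1_loss(warp_by_disparity(right, -disp_left), left)
```

The key point is that the supervisory signal comes entirely from a geometric relation that holds by construction, which is also the idea behind the cycle-consistency losses cited in this record.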
1811.11662
2903411490
Recent anchor-based deep face detectors have achieved promising performance, but they still struggle to detect hard faces, such as small, blurred and partially occluded faces. One reason is that they treat all images and faces equally, without putting more effort into hard ones; however, many training images contain only easy faces, which are less helpful for achieving better performance on hard images. In this paper, we propose that the robustness of a face detector against hard faces can be improved by learning small faces on hard images. Our intuitions are that (1) hard images are the images which contain at least one hard face, so they facilitate training robust face detectors; and (2) most hard faces are small faces, and other types of hard faces can be easily converted to small faces by shrinking. We build an anchor-based deep face detector, which only outputs a single feature map with small anchors, to specifically learn small faces, and train it with a novel hard image mining strategy. Extensive experiments have been conducted on the WIDER FACE, FDDB, Pascal Faces, and AFW datasets to show the effectiveness of our method. Our method achieves APs of 95.7, 94.9 and 89.7 on the easy, medium and hard WIDER FACE val subsets respectively, surpassing the previous state of the art, especially on the hard subset. Code and model are available at this https URL.
Face detection has received extensive research attention @cite_6 @cite_40 @cite_24 . With the emergence of modern CNNs @cite_11 @cite_0 @cite_17 and object detectors @cite_14 @cite_33 @cite_7 @cite_8 @cite_1 , many face detectors have been proposed that achieve promising performance @cite_23 @cite_43 @cite_29 @cite_35 @cite_36 @cite_44 by adapting general object detection frameworks to the face detection domain. We briefly review hard example mining, face detection architectures, and anchor design and matching. (A hard-example-mining sketch follows the reference abstracts below.)
{ "cite_N": [ "@cite_35", "@cite_14", "@cite_33", "@cite_7", "@cite_8", "@cite_36", "@cite_29", "@cite_1", "@cite_17", "@cite_6", "@cite_24", "@cite_0", "@cite_40", "@cite_43", "@cite_44", "@cite_23", "@cite_11" ], "mid": [ "2769576731", "2950800384", "2193145675", "", "2953106684", "", "2790025297", "2964010755", "1686810756", "1966822758", "2137401668", "", "", "", "2750317406", "2747648373", "2949650786" ], "abstract": [ "The performance of face detection has been largely improved with the development of convolutional neural network. However, the occlusion issue due to mask and sunglasses, is still a challenging problem. The improvement on the recall of these occluded cases usually brings the risk of high false positives. In this paper, we present a novel face detector called Face Attention Network (FAN), which can significantly improve the recall of the face detection problem in the occluded case without compromising the speed. More specifically, we propose a new anchor-level attention, which will highlight the features from the face region. Integrated with our anchor assign strategy and data augmentation techniques, we obtain state-of-art results on public face detection benchmarks like WiderFace and MAFA. The code will be released for reproduction.", "We present region-based, fully convolutional networks for accurate and efficient object detection. In contrast to previous region-based detectors such as Fast Faster R-CNN that apply a costly per-region subnetwork hundreds of times, our region-based detector is fully convolutional with almost all computation shared on the entire image. To achieve this goal, we propose position-sensitive score maps to address a dilemma between translation-invariance in image classification and translation-variance in object detection. Our method can thus naturally adopt fully convolutional image classifier backbones, such as the latest Residual Networks (ResNets), for object detection. We show competitive results on the PASCAL VOC datasets (e.g., 83.6 mAP on the 2007 set) with the 101-layer ResNet. Meanwhile, our result is achieved at a test-time speed of 170ms per image, 2.5-20x faster than the Faster R-CNN counterpart. Code is made publicly available at: this https URL", "We present a method for detecting objects in images using a single deep neural network. Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. SSD is simple relative to methods that require object proposals because it completely eliminates proposal generation and subsequent pixel or feature resampling stages and encapsulates all computation in a single network. This makes SSD easy to train and straightforward to integrate into systems that require a detection component. Experimental results on the PASCAL VOC, COCO, and ILSVRC datasets confirm that SSD has competitive accuracy to methods that utilize an additional object proposal step and is much faster, while providing a unified framework for both training and inference. 
For (300 300 ) input, SSD achieves 74.3 mAP on VOC2007 test at 59 FPS on a Nvidia Titan X and for (512 512 ) input, SSD achieves 76.9 mAP, outperforming a comparable state of the art Faster R-CNN model. Compared to other single stage methods, SSD has much better accuracy even with a smaller input image size. Code is available at https: github.com weiliu89 caffe tree ssd.", "", "State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet and Fast R-CNN have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features---using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model, our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available.", "", "Face detection has been well studied for many years and one of the remaining challenges is to detect small, blurred and partially occluded faces in uncontrolled environment. This paper proposes a novel context-assisted single shot face detector, named PyramidBox, to handle the hard face detection problem. Observing the importance of the context, we improve the utilization of contextual information in the following three aspects. First, we design a novel contextual anchor to supervise high-level contextual feature learning by a semi-supervised method, which we call it PyramidAnchors. Second, we propose the Low-level Feature Pyramid Network to combine adequate high-level contextual semantic feature and Low-level facial feature together, which also allows the PyramidBox to predict faces of all scales in a single shot. Third, we introduce a context-sensitive structure to increase the capacity of prediction network to improve the final accuracy of output. In addition, we use the method of Data-anchor-sampling to augment the training samples across different scales, which increases the diversities of training data for smaller faces. By exploiting the value of context, PyramidBox achieves superior performance among the state-of-the-art on the two common face detection benchmarks, FDDB and WIDER FACE.", "We propose a novel single shot object detection network named Detection with Enriched Semantics (DES). Our motivation is to enrich the semantics of object detection features within a typical deep detector, by a semantic segmentation branch and a global activation module. The segmentation branch is supervised by weak segmentation ground-truth, i.e., no extra annotation is required. 
In conjunction with that, we employ a global activation module which learns relationship between channels and object classes in a self-supervised manner. Comprehensive experimental results on both PASCAL VOC and MS COCO detection datasets demonstrate the effectiveness of the proposed method. In particular, with a VGG16 based DES, we achieve an mAP of 81.7 on VOC2007 test and an mAP of 32.8 on COCO test-dev with an inference speed of 31.5 milliseconds per image on a Titan Xp GPU. With a lower resolution version, we achieve an mAP of 79.7 on VOC2007 with an inference speed of 13.0 milliseconds per image.", "In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.", "Despite the fact that face detection has been studied intensively over the past several decades, the problem is still not completely solved. Challenging conditions, such as extreme pose, lighting, and occlusion, have historically hampered traditional, model-based methods. In contrast, exemplar-based face detection has been shown to be effective, even under these challenging conditions, primarily because a large exemplar database is leveraged to cover all possible visual variations. However, relying heavily on a large exemplar database to deal with the face appearance variations makes the detector impractical due to the high space and time complexity. We construct an efficient boosted exemplar-based face detector which overcomes the defect of the previous work by being faster, more memory efficient, and more accurate. In our method, exemplars as weak detectors are discriminatively trained and selectively assembled in the boosting framework which largely reduces the number of required exemplars. Notably, we propose to include non-face images as negative exemplars to actively suppress false detections to further improve the detection accuracy. We verify our approach over two public face detection benchmarks and one personal photo album, and achieve significant improvement over the state-of-the-art algorithms in terms of both accuracy and efficiency.", "This paper describes a face detection framework that is capable of processing images extremely rapidly while achieving high detection rates. There are three key contributions. The first is the introduction of a new image representation called the “Integral Image” which allows the features used by our detector to be computed very quickly. The second is a simple and efficient classifier which is built using the AdaBoost learning algorithm (Freund and Schapire, 1995) to select a small number of critical visual features from a very large set of potential features. 
The third contribution is a method for combining classifiers in a “cascade” which allows background regions of the image to be quickly discarded while spending more computation on promising face-like regions. A set of experiments in the domain of face detection is presented. The system yields face detection performance comparable to the best previous systems (Sung and Poggio, 1998; , 1998; Schneiderman and Kanade, 2000; , 2000). Implemented on a conventional desktop, face detection proceeds at 15 frames per second.", "", "", "", "This paper presents a real-time face detector, named Single Shot Scale-invariant Face Detector (S @math FD), which performs superiorly on various scales of faces with a single deep neural network, especially for small faces. Specifically, we try to solve the common problem that anchor-based detectors deteriorate dramatically as the objects become smaller. We make contributions in the following three aspects: 1) proposing a scale-equitable face detection framework to handle different scales of faces well. We tile anchors on a wide range of layers to ensure that all scales of faces have enough features for detection. Besides, we design anchor scales based on the effective receptive field and a proposed equal proportion interval principle; 2) improving the recall rate of small faces by a scale compensation anchor matching strategy; 3) reducing the false positive rate of small faces via a max-out background label. As a consequence, our method achieves state-of-the-art detection performance on all the common face detection benchmarks, including the AFW, PASCAL face, FDDB and WIDER FACE datasets, and can run at 36 FPS on a Nvidia Titan X (Pascal) for VGA-resolution images.", "We introduce the Single Stage Headless (SSH) face detector. Unlike two stage proposal-classification detectors, SSH detects faces in a single stage directly from the early convolutional layers in a classification network. SSH is headless. That is, it is able to achieve state-of-the-art results while removing the \"head\" of its underlying classification network -- i.e. all fully connected layers in the VGG-16 which contains a large number of parameters. Additionally, instead of relying on an image pyramid to detect faces with various scales, SSH is scale-invariant by design. We simultaneously detect faces with different scales in a single forward pass of the network, but from different layers. These properties make SSH fast and light-weight. Surprisingly, with a headless VGG-16, SSH beats the ResNet-101-based state-of-the-art on the WIDER dataset. Even though, unlike the current state-of-the-art, SSH does not use an image pyramid and is 5X faster. Moreover, if an image pyramid is deployed, our light-weight network achieves state-of-the-art on all subsets of the WIDER dataset, improving the AP by 2.5 . SSH also reaches state-of-the-art results on the FDDB and Pascal-Faces datasets while using a small input size, leading to a runtime of 50 ms image on a GPU. The code is available at this https URL.", "Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. 
On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation." ] }
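Since hard example mining is one of the ingredients reviewed above, here is a hedged sketch of online hard example mining for anchor classification; the 3:1 negative-to-positive ratio and the cross-entropy form are common defaults from the detection literature, not necessarily this paper's exact choice:

```python
import torch
import torch.nn.functional as F

def ohem_classification_loss(logits, labels, neg_pos_ratio=3):
    """Keep all positive anchors, but only the highest-loss negatives,
    up to neg_pos_ratio times the number of positives."""
    per_anchor = F.cross_entropy(logits, labels, reduction="none")
    pos = labels > 0
    num_pos = int(pos.sum().clamp(min=1))
    neg_losses = per_anchor[~pos]
    k = min(neg_pos_ratio * num_pos, neg_losses.numel())
    hard_negs, _ = neg_losses.topk(k)
    return (per_anchor[pos].sum() + hard_negs.sum()) / num_pos

# Toy usage: 100 anchors, binary face-vs-background labels.
logits = torch.randn(100, 2)
labels = torch.zeros(100, dtype=torch.long)
labels[:5] = 1
print(ohem_classification_loss(logits, labels))
```

Mining at the image level, as the abstract above proposes, operates one level higher: instead of reweighting anchors inside an image, whole hard images are sampled more often.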
1811.11662
2903411490
Recent anchor-based deep face detectors have achieved promising performance, but they still struggle to detect hard faces, such as small, blurred and partially occluded faces. One reason is that they treat all images and faces equally, without putting more effort into hard ones; however, many training images contain only easy faces, which are less helpful for achieving better performance on hard images. In this paper, we propose that the robustness of a face detector against hard faces can be improved by learning small faces on hard images. Our intuitions are that (1) hard images are the images which contain at least one hard face, so they facilitate training robust face detectors; and (2) most hard faces are small faces, and other types of hard faces can be easily converted to small faces by shrinking. We build an anchor-based deep face detector, which only outputs a single feature map with small anchors, to specifically learn small faces, and train it with a novel hard image mining strategy. Extensive experiments have been conducted on the WIDER FACE, FDDB, Pascal Faces, and AFW datasets to show the effectiveness of our method. Our method achieves APs of 95.7, 94.9 and 89.7 on the easy, medium and hard WIDER FACE val subsets respectively, surpassing the previous state of the art, especially on the hard subset. Code and model are available at this https URL.
Recent state-of-the-art face detectors are generally built on Faster R-CNN @cite_8 , R-FCN @cite_14 or SSD @cite_33 . SSH @cite_23 exploited the RPN (Region Proposal Network) from Faster R-CNN to detect faces, building three detection feature maps and attaching six anchors of different sizes to them. S @math FD @cite_44 and PyramidBox @cite_29 , on the other hand, adopted SSD as their detection architecture with six different detection feature maps. Unlike S @math FD, PyramidBox exploited a feature-pyramid-style structure to combine features from different detection feature maps. Our proposed method, in contrast, builds only a single-level detection feature map, based on VGG16, for classification and bounding-box regression, which is both simple and effective. (An anchor-tiling sketch follows the reference abstracts below.)
{ "cite_N": [ "@cite_14", "@cite_33", "@cite_8", "@cite_29", "@cite_44", "@cite_23" ], "mid": [ "2950800384", "2193145675", "2953106684", "2790025297", "2750317406", "2747648373" ], "abstract": [ "We present region-based, fully convolutional networks for accurate and efficient object detection. In contrast to previous region-based detectors such as Fast Faster R-CNN that apply a costly per-region subnetwork hundreds of times, our region-based detector is fully convolutional with almost all computation shared on the entire image. To achieve this goal, we propose position-sensitive score maps to address a dilemma between translation-invariance in image classification and translation-variance in object detection. Our method can thus naturally adopt fully convolutional image classifier backbones, such as the latest Residual Networks (ResNets), for object detection. We show competitive results on the PASCAL VOC datasets (e.g., 83.6 mAP on the 2007 set) with the 101-layer ResNet. Meanwhile, our result is achieved at a test-time speed of 170ms per image, 2.5-20x faster than the Faster R-CNN counterpart. Code is made publicly available at: this https URL", "We present a method for detecting objects in images using a single deep neural network. Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. SSD is simple relative to methods that require object proposals because it completely eliminates proposal generation and subsequent pixel or feature resampling stages and encapsulates all computation in a single network. This makes SSD easy to train and straightforward to integrate into systems that require a detection component. Experimental results on the PASCAL VOC, COCO, and ILSVRC datasets confirm that SSD has competitive accuracy to methods that utilize an additional object proposal step and is much faster, while providing a unified framework for both training and inference. For (300 300 ) input, SSD achieves 74.3 mAP on VOC2007 test at 59 FPS on a Nvidia Titan X and for (512 512 ) input, SSD achieves 76.9 mAP, outperforming a comparable state of the art Faster R-CNN model. Compared to other single stage methods, SSD has much better accuracy even with a smaller input image size. Code is available at https: github.com weiliu89 caffe tree ssd.", "State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet and Fast R-CNN have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. 
We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features---using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model, our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available.", "Face detection has been well studied for many years and one of the remaining challenges is to detect small, blurred and partially occluded faces in uncontrolled environment. This paper proposes a novel context-assisted single shot face detector, named PyramidBox, to handle the hard face detection problem. Observing the importance of the context, we improve the utilization of contextual information in the following three aspects. First, we design a novel contextual anchor to supervise high-level contextual feature learning by a semi-supervised method, which we call it PyramidAnchors. Second, we propose the Low-level Feature Pyramid Network to combine adequate high-level contextual semantic feature and Low-level facial feature together, which also allows the PyramidBox to predict faces of all scales in a single shot. Third, we introduce a context-sensitive structure to increase the capacity of prediction network to improve the final accuracy of output. In addition, we use the method of Data-anchor-sampling to augment the training samples across different scales, which increases the diversities of training data for smaller faces. By exploiting the value of context, PyramidBox achieves superior performance among the state-of-the-art on the two common face detection benchmarks, FDDB and WIDER FACE.", "This paper presents a real-time face detector, named Single Shot Scale-invariant Face Detector (S @math FD), which performs superiorly on various scales of faces with a single deep neural network, especially for small faces. Specifically, we try to solve the common problem that anchor-based detectors deteriorate dramatically as the objects become smaller. We make contributions in the following three aspects: 1) proposing a scale-equitable face detection framework to handle different scales of faces well. We tile anchors on a wide range of layers to ensure that all scales of faces have enough features for detection. Besides, we design anchor scales based on the effective receptive field and a proposed equal proportion interval principle; 2) improving the recall rate of small faces by a scale compensation anchor matching strategy; 3) reducing the false positive rate of small faces via a max-out background label. As a consequence, our method achieves state-of-the-art detection performance on all the common face detection benchmarks, including the AFW, PASCAL face, FDDB and WIDER FACE datasets, and can run at 36 FPS on a Nvidia Titan X (Pascal) for VGA-resolution images.", "We introduce the Single Stage Headless (SSH) face detector. Unlike two stage proposal-classification detectors, SSH detects faces in a single stage directly from the early convolutional layers in a classification network. SSH is headless. That is, it is able to achieve state-of-the-art results while removing the \"head\" of its underlying classification network -- i.e. 
all fully connected layers in the VGG-16 which contains a large number of parameters. Additionally, instead of relying on an image pyramid to detect faces with various scales, SSH is scale-invariant by design. We simultaneously detect faces with different scales in a single forward pass of the network, but from different layers. These properties make SSH fast and light-weight. Surprisingly, with a headless VGG-16, SSH beats the ResNet-101-based state-of-the-art on the WIDER dataset. Even though, unlike the current state-of-the-art, SSH does not use an image pyramid and is 5X faster. Moreover, if an image pyramid is deployed, our light-weight network achieves state-of-the-art on all subsets of the WIDER dataset, improving the AP by 2.5%. SSH also reaches state-of-the-art results on the FDDB and Pascal-Faces datasets while using a small input size, leading to a runtime of 50 ms/image on a GPU. The code is available at this https URL." ] }
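To illustrate the single-level design described above, here is a small sketch of tiling square anchors over one detection feature map; the stride and the anchor sizes are hypothetical placeholders, since the related work keeps the actual values symbolic:

```python
import numpy as np

def make_anchors(fm_h, fm_w, stride=8, sizes=(16, 32, 64)):
    """Tile square anchors of each size at every cell of a single
    detection feature map; boxes are (x1, y1, x2, y2) in image pixels."""
    cy = (np.arange(fm_h) + 0.5) * stride
    cx = (np.arange(fm_w) + 0.5) * stride
    cxs, cys = np.meshgrid(cx, cy)
    anchors = []
    for s in sizes:
        half = s / 2.0
        boxes = np.stack([cxs - half, cys - half,
                          cxs + half, cys + half], axis=-1)
        anchors.append(boxes.reshape(-1, 4))
    return np.concatenate(anchors, axis=0)

print(make_anchors(40, 40).shape)   # (40 * 40 * 3, 4) = (4800, 4)
```

With a single output map, the classification and regression heads then predict one score and four offsets per tiled anchor, which keeps the head as simple as the paragraph claims.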
1811.11662
2903411490
Recent anchor-based deep face detectors have achieved promising performance, but they still struggle to detect hard faces, such as small, blurred and partially occluded faces. One reason is that they treat all images and faces equally, without putting more effort into hard ones; however, many training images contain only easy faces, which are less helpful for achieving better performance on hard images. In this paper, we propose that the robustness of a face detector against hard faces can be improved by learning small faces on hard images. Our intuitions are that (1) hard images are the images which contain at least one hard face, so they facilitate training robust face detectors; and (2) most hard faces are small faces, and other types of hard faces can be easily converted to small faces by shrinking. We build an anchor-based deep face detector, which only outputs a single feature map with small anchors, to specifically learn small faces, and train it with a novel hard image mining strategy. Extensive experiments have been conducted on the WIDER FACE, FDDB, Pascal Faces, and AFW datasets to show the effectiveness of our method. Our method achieves APs of 95.7, 94.9 and 89.7 on the easy, medium and hard WIDER FACE val subsets respectively, surpassing the previous state of the art, especially on the hard subset. Code and model are available at this https URL.
SNIP @cite_13 discussed an alternative approach to handling scale. It showed that CNNs are not robust to changes in scale, so training and testing on the same scales of an image pyramid can be a better strategy. In our paper, we exploit this idea by limiting the anchor sizes to ( @math ), ( @math ) and ( @math ). Faces that are either too small or too large are then matched to none of the anchors, and are thus ignored during training and testing. By removing large anchors with sizes above ( @math ), our network focuses more on small faces, which are potentially more difficult. To deal with large faces, we use multiscale training and testing to resize them to match our anchors. Experiments show that this design performs well on both small and large faces, even though it uses fewer detection feature maps and anchor sizes. (A size-gated matching sketch follows the reference abstract below.)
{ "cite_N": [ "@cite_13" ], "mid": [ "2951581050" ], "abstract": [ "An analysis of different techniques for recognizing and detecting objects under extreme scale variation is presented. Scale specific and scale invariant design of detectors are compared by training them with different configurations of input data. By evaluating the performance of different network architectures for classifying small objects on ImageNet, we show that CNNs are not robust to changes in scale. Based on this analysis, we propose to train and test detectors on the same scales of an image-pyramid. Since small and large objects are difficult to recognize at smaller and larger scales respectively, we present a novel training scheme called Scale Normalization for Image Pyramids (SNIP) which selectively back-propagates the gradients of object instances of different sizes as a function of the image scale. On the COCO dataset, our single model performance is 45.7 and an ensemble of 3 networks obtains an mAP of 48.3 . We use off-the-shelf ImageNet-1000 pre-trained models and only train with bounding box supervision. Our submission won the Best Student Entry in the COCO 2017 challenge. Code will be made available at this http URL ." ] }
1811.11510
2902643394
Person re-identification aims to retrieve pedestrian images, detected by pedestrian detectors, across non-overlapping camera views. Most existing person re-identification (re-ID) models fail to generalize well from the source domain, where the models are trained, to a new target domain without labels, because of the bias between the source and target domains. This issue significantly limits the scalability and usability of the models in the real world. Given a labeled source training set and an unlabeled target training set, the aim of this paper is to improve the generalization ability of re-ID models to the target domain. To this end, we propose an image generative network named identity preserving generative adversarial network (IPGAN). The proposed method has two excellent properties: 1) only a single model is employed to translate the labeled images from the source domain to the target camera domains in an unsupervised manner; 2) the identity information of images from the source domain is preserved before and after translation. Furthermore, we propose the IBN-reID model for the person re-identification task. It has better generalization ability than baseline models, especially in cases without any domain adaptation. The IBN-reID model is trained on the translated images by supervised methods. Experimental results on Market-1501 and DukeMTMC-reID show that the images generated by IPGAN are more suitable for cross-domain person re-identification. Very competitive re-ID accuracy is achieved by our method.
Generative adversarial networks (GANs) @cite_41 have shown remarkable performance improvements in various computer vision tasks in recent years, especially in image-to-image translation. For the image-to-image translation task, pix2pix @cite_32 uses a conditional GAN to learn a mapping from input to output images by combining an adversarial loss and an @math loss. However, this method needs paired data to train its model. For unpaired image-to-image translation, several methods have been proposed @cite_42 @cite_23 @cite_17 @cite_4 . UNIT @cite_46 combines variational autoencoders @cite_43 and CoGAN @cite_18 , in which the two generators share the same weights. CycleGAN @cite_42 and DiscoGAN @cite_23 use a cycle consistency to preserve key attributes. However, all the aforementioned frameworks only consider the mapping from a source domain to a single target domain. Different from them, we propose a new framework that translates images from the source domain to each target camera domain using only a single model, and we use it to improve the performance of cross-domain person re-ID. (A loss sketch follows the reference abstracts below.)
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_41", "@cite_42", "@cite_32", "@cite_43", "@cite_23", "@cite_46", "@cite_17" ], "mid": [ "2471149695", "2949257576", "2099471712", "", "", "", "2951939904", "", "2608015370" ], "abstract": [ "We propose coupled generative adversarial network (CoGAN) for learning a joint distribution of multi-domain images. In contrast to the existing approaches, which require tuples of corresponding images in different domains in the training set, CoGAN can learn a joint distribution without any tuple of corresponding images. It can learn a joint distribution with just samples drawn from the marginal distributions. This is achieved by enforcing a weight-sharing constraint that limits the network capacity and favors a joint distribution solution over a product of marginal distributions one. We apply CoGAN to several joint distribution learning tasks, including learning a joint distribution of color and depth images, and learning a joint distribution of face images with different attributes. For each task it successfully learns the joint distribution without any tuple of corresponding images. We also demonstrate its applications to domain adaptation and image transformation.", "The main contribution of this paper is a simple semi-supervised pipeline that only uses the original training set without collecting extra data. It is challenging in 1) how to obtain more training data only from the training set and 2) how to use the newly generated data. In this work, the generative adversarial network (GAN) is used to generate unlabeled samples. We propose the label smoothing regularization for outliers (LSRO). This method assigns a uniform label distribution to the unlabeled images, which regularizes the supervised model and improves the baseline. We verify the proposed method on a practical problem: person re-identification (re-ID). This task aims to retrieve a query person from other cameras. We adopt the deep convolutional generative adversarial network (DCGAN) for sample generation, and a baseline convolutional neural network (CNN) for representation learning. Experiments show that adding the GAN-generated data effectively improves the discriminative ability of learned CNN embeddings. On three large-scale datasets, Market-1501, CUHK03 and DukeMTMC-reID, we obtain +4.37 , +1.6 and +2.46 improvement in rank-1 precision over the baseline CNN, respectively. We additionally apply the proposed method to fine-grained bird recognition and achieve a +0.6 improvement over a strong baseline. The code is available at this https URL", "We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. 
Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.", "", "", "", "While humans easily recognize relations between data from different domains without any supervision, learning to automatically discover them is in general very challenging and needs many ground-truth pairs that illustrate the relations. To avoid costly pairing, we address the task of discovering cross-domain relations given unpaired data. We propose a method based on generative adversarial networks that learns to discover relations between different domains (DiscoGAN). Using the discovered relations, our proposed network successfully transfers style from one domain to another while preserving key attributes such as orientation and face identity. Source code for official implementation is publicly available this https URL", "", "Conditional Generative Adversarial Networks (GANs) for cross-domain image-to-image translation have made much progress recently. Depending on the task complexity, thousands to millions of labeled image pairs are needed to train a conditional GAN. However, human labeling is expensive, even impractical, and large quantities of data may not always be available. Inspired by dual learning from natural language translation, we develop a novel dual-GAN mechanism, which enables image translators to be trained from two sets of unlabeled images from two domains. In our architecture, the primal GAN learns to translate images from domain U to those in domain V, while the dual GAN learns to invert the task. The closed loop made by the primal and dual tasks allows images from either domain to be translated and then reconstructed. Hence a loss function that accounts for the reconstruction error of images can be used to train the translators. Experiments on multiple image translation tasks with unlabeled data show considerable performance gain of DualGAN over a single GAN. For some tasks, DualGAN can even achieve comparable or slightly better results than conditional GAN trained on fully labeled data." ] }
1811.11510
2902643394
Person re-identification aims to retrieve pedestrian images, detected by pedestrian detectors, from non-overlapping camera views. Most existing person re-identification (re-ID) models often fail to generalize well from the source domain where the models are trained to a new target domain without labels, because of the bias between the source and target domains. This issue significantly limits the scalability and usability of the models in the real world. Given a labeled source training set and an unlabeled target training set, the aim of this paper is to improve the generalization ability of re-ID models to the target domain. To this end, we propose an image generative network named identity preserving generative adversarial network (IPGAN). The proposed method has two excellent properties: 1) only a single model is employed to translate the labeled images from the source domain to the target camera domains in an unsupervised manner; 2) the identity information of images from the source domain is preserved before and after translation. Furthermore, we propose the IBN-reID model for the person re-identification task. It has better generalization ability than baseline models, especially in cases without any domain adaptation. The IBN-reID model is trained on the translated images by supervised methods. Experimental results on Market-1501 and DukeMTMC-reID show that the images generated by IPGAN are more suitable for cross-domain person re-identification. Very competitive re-ID accuracy is achieved by our method.
Most existing re-ID models are based on supervised learning @cite_31 @cite_7 @cite_30 @cite_15 @cite_0 @cite_56 @cite_34 @cite_47 . These models suffer from poor scalability in real-world environments. To address this scalability issue and improve generalization ability, unsupervised methods based on hand-crafted features @cite_19 @cite_11 @cite_26 @cite_28 @cite_20 @cite_8 @cite_57 @cite_30 @cite_35 can be applied. These methods aim to design or learn robust features for person re-ID. However, they ignore the distribution of samples in the dataset and yield much weaker performance on large-scale datasets than supervised learning methods.
{ "cite_N": [ "@cite_30", "@cite_35", "@cite_47", "@cite_26", "@cite_7", "@cite_15", "@cite_28", "@cite_8", "@cite_56", "@cite_0", "@cite_19", "@cite_57", "@cite_31", "@cite_34", "@cite_20", "@cite_11" ], "mid": [ "", "1979260620", "", "", "", "2300840837", "1518138188", "", "2342611082", "", "", "", "2068042582", "", "", "" ], "abstract": [ "", "In this paper, we present an appearance-based method for person re-identification. It consists in the extraction of features that model three complementary aspects of the human appearance: the overall chromatic content, the spatial arrangement of colors into stable regions, and the presence of recurrent local motifs with high entropy. All this information is derived from different body parts, and weighted opportunely by exploiting symmetry and asymmetry perceptual principles. In this way, robustness against very low resolution, occlusions and pose, viewpoint and illumination changes is achieved. The approach applies to situations where the number of candidates varies continuously, considering single images or bunch of frames for each individual. It has been tested on several public benchmark datasets (ViPER, iLIDS, ETHZ), gaining new state-of-the-art performances.", "", "", "", "Most existing person re-identification (re-id) methods focus on learning the optimal distance metrics across camera views. Typically a person's appearance is represented using features of thousands of dimensions, whilst only hundreds of training samples are available due to the difficulties in collecting matched training images. With the number of training samples much smaller than the feature dimension, the existing methods thus face the classic small sample size (SSS) problem and have to resort to dimensionality reduction techniques and or matrix regularisation, which lead to loss of discriminative power. In this work, we propose to overcome the SSS problem in re-id distance metric learning by matching people in a discriminative null space of the training data. In this null space, images of the same person are collapsed into a single point thus minimising the within-class scatter to the extreme and maximising the relative between-class separation simultaneously. Importantly, it has a fixed dimension, a closed-form solution and is very efficient to compute. Extensive experiments carried out on five person re-identification benchmarks including VIPeR, PRID2011, CUHK01, CUHK03 and Market1501 show that such a simple approach beats the state-of-the-art alternatives, often by a big margin.", "Viewpoint invariant pedestrian recognition is an important yet under-addressed problem in computer vision. This is likely due to the difficulty in matching two objects with unknown viewpoint and pose. This paper presents a method of performing viewpoint invariant pedestrian recognition using an efficiently and intelligently designed object representation, the ensemble of localized features (ELF). Instead of designing a specific feature by hand to solve the problem, we define a feature space using our intuition about the problem and let a machine learning algorithm find the best representation. We show how both an object class specific representation and a discriminative recognition model can be learned using the AdaBoost algorithm. This approach allows many different kinds of simple features to be combined into a single similarity function. 
The method is evaluated using a viewpoint invariant pedestrian recognition dataset and the results are shown to be superior to all previous benchmarks for both recognition and reacquisition of pedestrians.", "", "Learning generic and robust feature representations with data from multiple domains for the same problem is of great value, especially for the problems that have multiple datasets but none of them are large enough to provide abundant data variations. In this work, we present a pipeline for learning deep feature representations from multiple domains with Convolutional Neural Networks (CNNs). When training a CNN with data from all the domains, some neurons learn representations shared across several domains, while some others are effective only for a specific one. Based on this important observation, we propose a Domain Guided Dropout algorithm to improve the feature learning procedure. Experiments show the effectiveness of our pipeline and the proposed algorithm. Our methods on the person re-identification problem outperform stateof-the-art methods on multiple datasets by large margins.", "", "", "", "In this paper, we raise important issues on scalability and the required degree of supervision of existing Mahalanobis metric learning methods. Often rather tedious optimization procedures are applied that become computationally intractable on a large scale. Further, if one considers the constantly growing amount of data it is often infeasible to specify fully supervised labels for all data points. Instead, it is easier to specify labels in form of equivalence constraints. We introduce a simple though effective strategy to learn a distance metric from equivalence constraints, based on a statistical inference perspective. In contrast to existing methods we do not rely on complex optimization problems requiring computationally expensive iterations. Hence, our method is orders of magnitudes faster than comparable methods. Results on a variety of challenging benchmarks with rather diverse nature demonstrate the power of our method. These include faces in unconstrained environments, matching before unseen object instances and person re-identification across spatially disjoint cameras. In the latter two benchmarks we clearly outperform the state-of-the-art.", "", "", "" ] }
1811.11431
2902709614
We introduce a light-weight, power efficient, and general purpose convolutional neural network, ESPNetv2, for modeling visual and sequential data. Our network uses group point-wise and depth-wise dilated separable convolutions to learn representations from a large effective receptive field with fewer FLOPs and parameters. The performance of our network is evaluated on four different tasks: (1) object classification, (2) semantic segmentation, (3) object detection, and (4) language modeling. Experiments on these tasks, including image classification on the ImageNet and language modeling on the Penn Treebank dataset, demonstrate the superior performance of our method over the state-of-the-art methods. Our network outperforms ESPNet by 4-5% and has 2-4x fewer FLOPs on the PASCAL VOC and the Cityscapes dataset. Compared to YOLOv2 on the MS-COCO object detection, ESPNetv2 delivers 4.4% higher accuracy with 6x fewer FLOPs. Our experiments show that ESPNetv2 is much more power efficient than existing state-of-the-art efficient methods including ShuffleNets and MobileNets. Our code is open-source and available at this https URL
Most state-of-the-art efficient networks @cite_22 @cite_12 @cite_16 use depth-wise separable convolutions @cite_22 that factor a convolution into two steps to reduce computational complexity: (1) a depth-wise convolution that performs light-weight filtering by applying a single convolutional kernel per input channel, and (2) a point-wise convolution that usually expands the feature map along channels by learning linear combinations of the input channels. Another efficient form of convolution that has been used in efficient networks @cite_38 @cite_4 is group convolution @cite_0 , wherein input channels and convolutional kernels are factored into groups and each group is convolved independently. Our network extends the ESPNet network @cite_35 using these efficient forms of convolutions. To learn representations from a large effective receptive field, it uses depth-wise "dilated" separable convolutions instead of depth-wise separable convolutions.
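The factorization described above is easy to make precise. The following is a minimal PyTorch sketch of a depth-wise "dilated" separable convolution, not the actual ESPNetv2 module; the class name and layer sizes are illustrative.

import torch.nn as nn

class DepthwiseDilatedSeparableConv(nn.Module):
    # (1) depth-wise convolution: one 3x3 kernel per input channel
    #     (groups=in_ch); the dilation rate enlarges the effective
    #     receptive field without adding parameters.
    # (2) point-wise convolution: a 1x1 convolution that mixes channels
    #     via learned linear combinations.
    def __init__(self, in_ch, out_ch, dilation=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, padding=dilation,
                                   dilation=dilation, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

Relative to a standard convolution with a k x k kernel and C_out output channels, this factorization reduces the multiply-accumulate cost by roughly a factor of (k^2 * C_out) / (k^2 + C_out).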
{ "cite_N": [ "@cite_38", "@cite_35", "@cite_4", "@cite_22", "@cite_0", "@cite_16", "@cite_12" ], "mid": [ "2963125010", "2963418739", "2963993763", "2612445135", "2163605009", "", "" ], "abstract": [ "We introduce an extremely computation-efficient CNN architecture named ShuffleNet, which is designed specially for mobile devices with very limited computing power (e.g., 10-150 MFLOPs). The new architecture utilizes two new operations, pointwise group convolution and channel shuffle, to greatly reduce computation cost while maintaining accuracy. Experiments on ImageNet classification and MS COCO object detection demonstrate the superior performance of ShuffleNet over other structures, e.g. lower top-1 error (absolute 7.8 ) than recent MobileNet [12] on ImageNet classification task, under the computation budget of 40 MFLOPs. On an ARM-based mobile device, ShuffleNet achieves 13A— actual speedup over AlexNet while maintaining comparable accuracy.", "We introduce a fast and efficient convolutional neural network, ESPNet, for semantic segmentation of high resolution images under resource constraints. ESPNet is based on a new convolutional module, efficient spatial pyramid (ESP), which is efficient in terms of computation, memory, and power. ESPNet is 22 times faster (on a standard GPU) and 180 times smaller than the state-of-the-art semantic segmentation network PSPNet, while its category-wise accuracy is only 8 less. We evaluated ESPNet on a variety of semantic segmentation datasets including Cityscapes, PASCAL VOC, and a breast biopsy whole slide image dataset. Under the same constraints on memory and computation, ESPNet outperforms all the current efficient CNN networks such as MobileNet, ShuffleNet, and ENet on both standard metrics and our newly introduced performance metrics that measure efficiency on edge devices. Our network can process high resolution images at a rate of 112 and 9 frames per second on a standard GPU and edge device, respectively. Our code is open-source and available at https: sacmehta.github.io ESPNet .", "Deep neural networks are increasingly used on mobile devices, where computational resources are limited. In this paper we develop CondenseNet, a novel network architecture with unprecedented efficiency. It combines dense connectivity with a novel module called learned group convolution. The dense connectivity facilitates feature re-use in the network, whereas learned group convolutions remove connections between layers for which this feature re-use is superfluous. At test time, our model can be implemented using standard group convolutions, allowing for efficient computation in practice. Our experiments show that CondenseNets are far more efficient than state-of-the-art compact convolutional networks such as ShuffleNets.", "We present a class of efficient models called MobileNets for mobile and embedded vision applications. MobileNets are based on a streamlined architecture that uses depth-wise separable convolutions to build light weight deep neural networks. We introduce two simple global hyper-parameters that efficiently trade off between latency and accuracy. These hyper-parameters allow the model builder to choose the right sized model for their application based on the constraints of the problem. We present extensive experiments on resource and accuracy tradeoffs and show strong performance compared to other popular models on ImageNet classification. 
We then demonstrate the effectiveness of MobileNets across a wide range of applications and use cases including object detection, finegrain classification, face attributes and large scale geo-localization.", "We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5 and 17.0 which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called \"dropout\" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3 , compared to 26.2 achieved by the second-best entry.", "", "" ] }
1811.11431
2902709614
We introduce a light-weight, power efficient, and general purpose convolutional neural network, ESPNetv2, for modeling visual and sequential data. Our network uses group point-wise and depth-wise dilated separable convolutions to learn representations from a large effective receptive field with fewer FLOPs and parameters. The performance of our network is evaluated on four different tasks: (1) object classification, (2) semantic segmentation, (3) object detection, and (4) language modeling. Experiments on these tasks, including image classification on the ImageNet and language modeling on the Penn Treebank dataset, demonstrate the superior performance of our method over the state-of-the-art methods. Our network outperforms ESPNet by 4-5% and has 2-4x fewer FLOPs on the PASCAL VOC and the Cityscapes dataset. Compared to YOLOv2 on the MS-COCO object detection, ESPNetv2 delivers 4.4% higher accuracy with 6x fewer FLOPs. Our experiments show that ESPNetv2 is much more power efficient than existing state-of-the-art efficient methods including ShuffleNets and MobileNets. Our code is open-source and available at this https URL
In addition to convolutional factorization, a network's efficiency and accuracy can be further improved using methods such as channel shuffle @cite_7 and channel split @cite_7 . Such methods are orthogonal to our work.
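Channel shuffle itself is a parameter-free permutation. A minimal sketch in PyTorch (assuming x is a 4-D torch tensor whose channel count divides evenly into groups):

def channel_shuffle(x, groups):
    # Interleave channels across groups so the next group convolution
    # sees information from every group (the ShuffleNet operation).
    n, c, h, w = x.size()
    x = x.view(n, groups, c // groups, h, w)  # split channels into groups
    x = x.transpose(1, 2).contiguous()        # interleave the groups
    return x.view(n, c, h, w)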
{ "cite_N": [ "@cite_7" ], "mid": [ "2883780447" ], "abstract": [ "Currently, the neural network architecture design is mostly guided by the indirect metric of computation complexity, i.e., FLOPs. However, the direct metric, e.g., speed, also depends on the other factors such as memory access cost and platform characterics. Thus, this work proposes to evaluate the direct metric on the target platform, beyond only considering FLOPs. Based on a series of controlled experiments, this work derives several practical guidelines for efficient network design. Accordingly, a new architecture is presented, called ShuffleNet V2. Comprehensive ablation experiments verify that our model is the state-of-the-art in terms of speed and accuracy tradeoff." ] }
1811.11431
2902709614
We introduce a light-weight, power efficient, and general purpose convolutional neural network, ESPNetv2, for modeling visual and sequential data. Our network uses group point-wise and depth-wise dilated separable convolutions to learn representations from a large effective receptive field with fewer FLOPs and parameters. The performance of our network is evaluated on four different tasks: (1) object classification, (2) semantic segmentation, (3) object detection, and (4) language modeling. Experiments on these tasks, including image classification on the ImageNet and language modeling on the Penn Treebank dataset, demonstrate the superior performance of our method over the state-of-the-art methods. Our network outperforms ESPNet by 4-5% and has 2-4x fewer FLOPs on the PASCAL VOC and the Cityscapes dataset. Compared to YOLOv2 on the MS-COCO object detection, ESPNetv2 delivers 4.4% higher accuracy with 6x fewer FLOPs. Our experiments show that ESPNetv2 is much more power efficient than existing state-of-the-art efficient methods including ShuffleNets and MobileNets. Our code is open-source and available at this https URL
These approaches improve the inference of a pre-trained network by pruning network connections or channels @cite_56 @cite_6 @cite_11 @cite_27 @cite_20 . These approaches are effective because CNNs have a substantial number of redundant weights. The efficiency gains in most of these approaches come from the sparsity of parameters, which is difficult to exploit efficiently on CPUs due to the cost of look-up and data migration operations. These approaches are complementary to our network.
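As a minimal illustration of connection pruning, the sketch below performs one-shot magnitude pruning of a weight tensor. This is our own simplification: pipelines such as @cite_6 iterate prune-and-retrain, and the function name is hypothetical.

import torch

def magnitude_prune(weight, sparsity=0.9):
    # Zero out the smallest-magnitude connections; the binary mask is
    # kept so pruned weights stay at zero during retraining.
    k = max(1, int(weight.numel() * sparsity))
    threshold = weight.abs().flatten().kthvalue(k).values
    mask = (weight.abs() > threshold).to(weight.dtype)
    return weight * mask, mask

The speedup from such unstructured sparsity depends on sparse kernels, which is exactly the CPU look-up and data-migration cost noted above.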
{ "cite_N": [ "@cite_6", "@cite_56", "@cite_27", "@cite_20", "@cite_11" ], "mid": [ "2963674932", "2119144962", "2894936553", "2884751099", "2963000224" ], "abstract": [ "Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems. Also, conventional networks fix the architecture before training starts; as a result, training cannot improve the architecture. To address these limitations, we describe a method to reduce the storage and computation required by neural networks by an order of magnitude without affecting their accuracy by learning only the important connections. Our method prunes redundant connections using a three-step method. First, we train the network to learn which connections are important. Next, we prune the unimportant connections. Finally, we retrain the network to fine tune the weights of the remaining connections. On the ImageNet dataset, our method reduced the number of parameters of AlexNet by a factor of 9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar experiments with VGG-16 found that the total number of parameters can be reduced by 13x, from 138 million to 10.3 million, again with no loss of accuracy.", "Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems with limited hardware resources. To address this limitation, we introduce \"deep compression\", a three stage pipeline: pruning, trained quantization and Huffman coding, that work together to reduce the storage requirement of neural networks by 35x to 49x without affecting their accuracy. Our method first prunes the network by learning only the important connections. Next, we quantize the weights to enforce weight sharing, finally, we apply Huffman coding. After the first two steps we retrain the network to fine tune the remaining connections and the quantized centroids. Pruning, reduces the number of connections by 9x to 13x; Quantization then reduces the number of bits that represent each connection from 32 to 5. On the ImageNet dataset, our method reduced the storage required by AlexNet by 35x, from 240MB to 6.9MB, without loss of accuracy. Our method reduced the size of VGG-16 by 49x from 552MB to 11.3MB, again with no loss of accuracy. This allows fitting the model into on-chip SRAM cache rather than off-chip DRAM memory. Our compression method also facilitates the use of complex neural networks in mobile applications where application size and download bandwidth are constrained. Benchmarked on CPU, GPU and mobile GPU, compressed network has 3x to 4x layerwise speedup and 3x to 7x better energy efficiency.", "We present COBLA—Constrained Optimization Based Low-rank Approximation—a systematic method of finding an optimal low-rank approximation of a trained convolutional neural network, subject to constraints in the number of multiply-accumulate (MAC) operations and the memory footprint. COBLA optimally allocates the constrained computation resources into each layer of the approximated network. The singular value decomposition of the network weight is computed, then a binary masking variable is introduced to denote whether a particular singular value and the corresponding singular vectors are used in low-rank approximation. With this formulation, the number of the MAC operations and the memory footprint are represented as linear constraints in terms of the binary masking variables. 
The resulted 0–1 integer programming problem is approximately solved by sequential quadratic programming. COBLA does not introduce any hyperparameter. We empirically demonstrate that COBLA outperforms prior art using the SqueezeNet and VGG-16 architecture on the ImageNet dataset.", "Do convolutional networks really need a fixed feed-forward structure? What if, after identifying the high-level concept of an image, a network could move directly to a layer that can distinguish fine-grained differences? Currently, a network would first need to execute sometimes hundreds of intermediate layers that specialize in unrelated aspects. Ideally, the more a network already knows about an image, the better it should be at deciding which layer to compute next. In this work, we propose convolutional networks with adaptive inference graphs (ConvNet-AIG) that adaptively define their network topology conditioned on the input image. Following a high-level structure similar to residual networks (ResNets), ConvNet-AIG decides for each input image on the fly which layers are needed. In experiments on ImageNet we show that ConvNet-AIG learns distinct inference graphs for different categories. Both ConvNet-AIG with 50 and 101 layers outperform their ResNet counterpart, while using (20 ) and (33 ) less computations respectively. By grouping parameters into layers for related classes and only executing relevant layers, ConvNet-AIG improves both efficiency and overall classification quality. Lastly, we also study the effect of adaptive inference graphs on the susceptibility towards adversarial examples. We observe that ConvNet-AIG shows a higher robustness than ResNets, complementing other known defense mechanisms.", "High demand for computation resources severely hinders deployment of large-scale Deep Neural Networks (DNN) in resource constrained devices. In this work, we propose a Structured Sparsity Learning (SSL) method to regularize the structures (i.e., filters, channels, filter shapes, and layer depth) of DNNs. SSL can: (1) learn a compact structure from a bigger DNN to reduce computation cost; (2) obtain a hardware-friendly structured sparsity of DNN to efficiently accelerate the DNN's evaluation. Experimental results show that SSL achieves on average 5.1 × and 3.1 × speedups of convolutional layer computation of AlexNet against CPU and GPU, respectively, with off-the-shelf libraries. These speedups are about twice speedups of non-structured sparsity; (3) regularize the DNN structure to improve classification accuracy. The results show that for CIFAR-10, regularization on layer depth reduces a 20-layer Deep Residual Network (ResNet) to 18 layers while improves the accuracy from 91.25 to 92.60 , which is still higher than that of original ResNet with 32 layers. For AlexNet, SSL reduces the error by 1 ." ] }
1811.11431
2902709614
We introduce a light-weight, power efficient, and general purpose convolutional neural network, ESPNetv2, for modeling visual and sequential data. Our network uses group point-wise and depth-wise dilated separable convolutions to learn representations from a large effective receptive field with fewer FLOPs and parameters. The performance of our network is evaluated on four different tasks: (1) object classification, (2) semantic segmentation, (3) object detection, and (4) language modeling. Experiments on these tasks, including image classification on the ImageNet and language modeling on the Penn Treebank dataset, demonstrate the superior performance of our method over the state-of-the-art methods. Our network outperforms ESPNet by 4-5% and has 2-4x fewer FLOPs on the PASCAL VOC and the Cityscapes dataset. Compared to YOLOv2 on the MS-COCO object detection, ESPNetv2 delivers 4.4% higher accuracy with 6x fewer FLOPs. Our experiments show that ESPNetv2 is much more power efficient than existing state-of-the-art efficient methods including ShuffleNets and MobileNets. Our code is open-source and available at this https URL
Another approach to improving the inference of a pre-trained network is low-bit representation of network weights using quantization @cite_48 @cite_29 @cite_31 @cite_10 @cite_46 @cite_42 @cite_50 . These approaches use fewer bits to represent the weights of a pre-trained network instead of 32-bit high-precision floating-point values. Similar to network compression-based methods, these approaches are complementary to our work.
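A minimal sketch of symmetric uniform quantization with a per-tensor scale; this simulates quantization by rounding to b-bit signed levels and de-quantizing, and is not the exact scheme of any particular cited method.

import torch

def fake_quantize(w, num_bits=8):
    # Map 32-bit floats onto 2^(b-1)-1 signed integer levels and back.
    # Assumes w is not all zeros.
    qmax = 2 ** (num_bits - 1) - 1
    scale = w.abs().max() / qmax                      # per-tensor scale
    w_int = torch.clamp(torch.round(w / scale), -qmax, qmax)
    return w_int * scale                              # de-quantized weights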
{ "cite_N": [ "@cite_31", "@cite_48", "@cite_29", "@cite_42", "@cite_50", "@cite_46", "@cite_10" ], "mid": [ "2233116163", "2161758346", "2300242332", "2950894517", "2963145956", "2469490737", "2319920447" ], "abstract": [ "Recently, convolutional neural networks (CNN) have demonstrated impressive performance in various computer vision tasks. However, high performance hardware is typically indispensable for the application of CNN models due to the high computation complexity, which prohibits their further extensions. In this paper, we propose an efficient framework, namely Quantized CNN, to simultaneously speed-up the computation and reduce the storage and memory overhead of CNN models. Both filter kernels in convolutional layers and weighting matrices in fully-connected layers are quantized, aiming at minimizing the estimation error of each layer's response. Extensive experiments on the ILSVRC-12 benchmark demonstrate 4 6× speed-up and 15 20× compression with merely one percentage loss of classification accuracy. With our quantized CNN model, even mobile devices can accurately classify images within one second.", "Multilayer Neural Networks (MNNs) are commonly trained using gradient descent-based methods, such as BackPropagation (BP). Inference in probabilistic graphical models is often done using variational Bayes methods, such as Expectation Propagation (EP). We show how an EP based approach can also be used to train deterministic MNNs. Specifically, we approximate the posterior of the weights given the data using a \"mean-field\" factorized distribution, in an online setting. Using online EP and the central limit theorem we find an analytical approximation to the Bayes update of this posterior, as well as the resulting Bayes estimates of the weights and outputs. Despite a different origin, the resulting algorithm, Expectation BackPropagation (EBP), is very similar to BP in form and efficiency. However, it has several additional advantages: (1) Training is parameter-free, given initial conditions (prior) and the MNN architecture. This is useful for large-scale problems, where parameter tuning is a major challenge. (2) The weights can be restricted to have discrete values. This is especially useful for implementing trained MNNs in precision limited hardware chips, thus improving their speed and energy efficiency by several orders of magnitude. We test the EBP algorithm numerically in eight binary text classification tasks. In all tasks, EBP outperforms: (1) standard BP with the optimal constant learning rate (2) previously reported state of the art. Interestingly, EBP-trained MNNs with binary weights usually perform better than MNNs with continuous (real) weights - if we average the MNN output using the inferred posterior.", "We propose two efficient approximations to standard convolutional neural networks: Binary-Weight-Networks and XNOR-Networks. In Binary-Weight-Networks, the filters are approximated with binary values resulting in 32 ( ) memory saving. In XNOR-Networks, both the filters and the input to convolutional layers are binary. XNOR-Networks approximate convolutions using primarily binary operations. This results in 58 ( ) faster convolutional operations (in terms of number of the high precision operations) and 32 ( ) memory savings. XNOR-Nets offer the possibility of running state-of-the-art networks on CPUs (rather than GPUs) in real-time. Our binary networks are simple, accurate, efficient, and work on challenging visual tasks. 
We evaluate our approach on the ImageNet classification task. The classification accuracy with a Binary-Weight-Network version of AlexNet is the same as the full-precision AlexNet. We compare our method with recent network binarization methods, BinaryConnect and BinaryNets, and outperform these methods by large margins on ImageNet, more than (16 , ) in top-1 accuracy. Our code is available at: http: allenai.org plato xnornet.", "We introduce a method to train Quantized Neural Networks (QNNs) --- neural networks with extremely low precision (e.g., 1-bit) weights and activations, at run-time. At train-time the quantized weights and activations are used for computing the parameter gradients. During the forward pass, QNNs drastically reduce memory size and accesses, and replace most arithmetic operations with bit-wise operations. As a result, power consumption is expected to be drastically reduced. We trained QNNs over the MNIST, CIFAR-10, SVHN and ImageNet datasets. The resulting QNNs achieve prediction accuracy comparable to their 32-bit counterparts. For example, our quantized version of AlexNet with 1-bit weights and 2-bit activations achieves @math top-1 accuracy. Moreover, we quantize the parameter gradients to 6-bits as well which enables gradients computation using only bit-wise operation. Quantized recurrent neural networks were tested over the Penn Treebank dataset, and achieved comparable accuracy as their 32-bit counterparts using only 4-bits. Last but not least, we programmed a binary matrix multiplication GPU kernel with which it is possible to run our MNIST QNN 7 times faster than with an unoptimized GPU kernel, without suffering any loss in classification accuracy. The QNN code is available online.", "Convolutional neural networks (CNNs) have revolutionized the world of computer vision over the last few years, pushing image classification beyond human accuracy. The computational effort of today’s CNNs requires power-hungry parallel processors or GP-GPUs. Recent developments in CNN accelerators for system-on-chip integration have reduced energy consumption significantly. Unfortunately, even these highly optimized devices are above the power envelope imposed by mobile and deeply embedded applications and face hard limitations caused by CNN weight I O and storage. This prevents the adoption of CNNs in future ultralow power Internet of Things end-nodes for near-sensor analytics. Recent algorithmic and theoretical advancements enable competitive classification accuracy even when limiting CNNs to binary (+1 −1) weights during training. These new findings bring major optimization opportunities in the arithmetic core by removing the need for expensive multiplications, as well as reducing I O bandwidth and storage. In this paper, we present an accelerator optimized for binary-weight CNNs that achieves 1.5 TOp s at 1.2 V on a core area of only 1.33 million gate equivalent (MGE) or 1.9 mm 2 and with a power dissipation of 895 @math W in UMC 65-nm technology at 0.6 V. Our accelerator significantly outperforms the state-of-the-art in terms of energy and area efficiency achieving 61.2 TOp s W@0.6 V and 1.1 TOp s MGE@1.2 V, respectively.", "We propose DoReFa-Net, a method to train convolutional neural networks that have low bitwidth weights and activations using low bitwidth parameter gradients. In particular, during backward pass, parameter gradients are stochastically quantized to low bitwidth numbers before being propagated to convolutional layers. 
As convolutions during forward backward passes can now operate on low bitwidth weights and activations gradients respectively, DoReFa-Net can use bit convolution kernels to accelerate both training and inference. Moreover, as bit convolutions can be efficiently implemented on CPU, FPGA, ASIC and GPU, DoReFa-Net opens the way to accelerate training of low bitwidth neural network on these hardware. Our experiments on SVHN and ImageNet datasets prove that DoReFa-Net can achieve comparable prediction accuracy as 32-bit counterparts. For example, a DoReFa-Net derived from AlexNet that has 1-bit weights, 2-bit activations, can be trained from scratch using 6-bit gradients to get 46.1 top-1 accuracy on ImageNet validation set. The DoReFa-Net AlexNet model is released publicly.", "We introduce a method to train Binarized Neural Networks (BNNs) - neural networks with binary weights and activations at run-time. At training-time the binary weights and activations are used for computing the parameters gradients. During the forward pass, BNNs drastically reduce memory size and accesses, and replace most arithmetic operations with bit-wise operations, which is expected to substantially improve power-efficiency. To validate the effectiveness of BNNs we conduct two sets of experiments on the Torch7 and Theano frameworks. On both, BNNs achieved nearly state-of-the-art results over the MNIST, CIFAR-10 and SVHN datasets. Last but not least, we wrote a binary matrix multiplication GPU kernel with which it is possible to run our MNIST BNN 7 times faster than with an unoptimized GPU kernel, without suffering any loss in classification accuracy. The code for training and running our BNNs is available on-line." ] }
1811.11365
2902169904
Unsupervised neural machine translation (UNMT) has recently achieved remarkable results with only large monolingual corpora in each language. However, the uncertainty of associating target with source sentences makes UNMT theoretically an ill-posed problem. This work investigates the possibility of utilizing images for disambiguation to improve the performance of UNMT. Our assumption is intuitively based on the invariant property of images, i.e., the description of the same visual content by different languages should be approximately similar. We propose an unsupervised multi-modal machine translation (UMNMT) framework based on the language translation cycle consistency loss conditional on the image, aiming to learn the bidirectional multi-modal translation simultaneously. Through alternate training between multi-modal and uni-modal data, our inference model can translate with or without the image. On the widely used Multi30K dataset, the experimental results of our approach are significantly better than those of the text-only UNMT on the 2016 test dataset.
Existing methods in this area @cite_24 @cite_21 @cite_17 are mainly modifications of the encoder-decoder schema. Their key idea is to build a common latent space between the two languages (or domains) and to learn to translate by reconstructing in both domains. The difficulty in multi-modal translation is the involvement of another visual domain, which is quite different from the language domain. The interaction between image and text is usually not as symmetric as that between two text domains. This is the reason why we design the attention module carefully.
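One way to ground the "common latent space via reconstruction" idea is the denoising auto-encoding objective used in unsupervised NMT @cite_21 . The sketch below is our own simplification: corrupt is a hypothetical noise function (e.g., word drop and shuffle), shared_encoder and decoder are assumed sequence models, and tokens is a LongTensor of word indices.

import torch.nn.functional as F

def dae_loss(shared_encoder, decoder, corrupt, tokens):
    # Encode a corrupted sentence with the shared encoder and train the
    # language-specific decoder to reconstruct the clean sentence; doing
    # this for both languages ties them to one latent space.
    latent = shared_encoder(corrupt(tokens))    # (batch, len, dim)
    logits = decoder(latent)                    # (batch, len, vocab)
    return F.cross_entropy(logits.transpose(1, 2), tokens)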
{ "cite_N": [ "@cite_24", "@cite_21", "@cite_17" ], "mid": [ "2962824887", "2963602293", "" ], "abstract": [ "In spite of the recent success of neural machine translation (NMT) in standard benchmarks, the lack of large parallel corpora poses a major practical problem for many language pairs. There have been several proposals to alleviate this issue with, for instance, triangulation and semi-supervised learning techniques, but they still require a strong cross-lingual signal. In this work, we completely remove the need of parallel data and propose a novel method to train an NMT system in a completely unsupervised manner, relying on nothing but monolingual corpora. Our model builds upon the recent work on unsupervised embedding mappings, and consists of a slightly modified attentional encoder-decoder model that can be trained on monolingual corpora alone using a combination of denoising and backtranslation. Despite the simplicity of the approach, our system obtains 15.56 and 10.21 BLEU points in WMT 2014 French-to-English and German-to-English translation. The model can also profit from small parallel corpora, and attains 21.81 and 15.24 points when combined with 100,000 parallel sentences, respectively.", "Machine translation has recently achieved impressive performance thanks to recent advances in deep learning and the availability of large-scale parallel corpora. There have been numerous attempts to extend these successes to low-resource language pairs, yet requiring tens of thousands of parallel sentences. In this work, we take this research direction to the extreme and investigate whether it is possible to learn to translate even without any parallel data. We propose a model that takes sentences from monolingual corpora in two different languages and maps them into the same latent space. By learning to reconstruct in both languages from this shared feature space, the model effectively learns to translate without using any labeled data. We demonstrate our model on two widely used datasets and two language pairs, reporting BLEU scores of 32.8 and 15.1 on the Multi30k and WMT English-French datasets, without using even a single parallel sentence at training time.", "" ] }
1811.11365
2902169904
Unsupervised neural machine translation (UNMT) has recently achieved remarkable results with only large monolingual corpora in each language. However, the uncertainty of associating target with source sentences makes UNMT theoretically an ill-posed problem. This work investigates the possibility of utilizing images for disambiguation to improve the performance of UNMT. Our assumption is intuitively based on the invariant property of images, i.e., the description of the same visual content by different languages should be approximately similar. We propose an unsupervised multi-modal machine translation (UMNMT) framework based on the language translation cycle consistency loss conditional on the image, aiming to learn the bidirectional multi-modal translation simultaneously. Through alternate training between multi-modal and uni-modal data, our inference model can translate with or without the image. On the widely used Multi30K dataset, the experimental results of our approach are significantly better than those of the text-only UNMT on the 2016 test dataset.
Most standard image caption models are built on the CNN-RNN based encoder-decoder framework @cite_11 @cite_26 , where visual features are extracted by a CNN and then fed into an RNN to output word sequences as captions. Since our corpora contain image-text paired data, our method also draws inspiration from image caption modeling. Thus, we also embed the image-caption model within our computational graph, with the Transformer architecture adopted as a substitute for the RNN.
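For reference, a minimal CNN-RNN captioner with teacher forcing can be sketched as below. This is a generic sketch rather than the model of @cite_26 ; the class name and dimensions are illustrative, and img_feat is assumed to be a pre-extracted CNN feature vector.

import torch
import torch.nn as nn

class CaptionModel(nn.Module):
    # CNN features initialize an LSTM that emits the caption word by word.
    def __init__(self, feat_dim, vocab_size, hidden=512):
        super().__init__()
        self.img_proj = nn.Linear(feat_dim, hidden)   # image -> initial state
        self.embed = nn.Embedding(vocab_size, hidden)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, img_feat, captions):
        h0 = torch.tanh(self.img_proj(img_feat)).unsqueeze(0)  # (1, B, H)
        c0 = torch.zeros_like(h0)
        states, _ = self.lstm(self.embed(captions), (h0, c0))
        return self.out(states)  # logits for the next word at each step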
{ "cite_N": [ "@cite_26", "@cite_11" ], "mid": [ "1895577753", "2951805548" ], "abstract": [ "Automatically describing the content of an image is a fundamental problem in artificial intelligence that connects computer vision and natural language processing. In this paper, we present a generative model based on a deep recurrent architecture that combines recent advances in computer vision and machine translation and that can be used to generate natural sentences describing an image. The model is trained to maximize the likelihood of the target description sentence given the training image. Experiments on several datasets show the accuracy of the model and the fluency of the language it learns solely from image descriptions. Our model is often quite accurate, which we verify both qualitatively and quantitatively. For instance, while the current state-of-the-art BLEU-1 score (the higher the better) on the Pascal dataset is 25, our approach yields 59, to be compared to human performance around 69. We also show BLEU-1 score improvements on Flickr30k, from 56 to 66, and on SBU, from 19 to 28. Lastly, on the newly released COCO dataset, we achieve a BLEU-4 of 27.7, which is the current state-of-the-art.", "We present a model that generates natural language descriptions of images and their regions. Our approach leverages datasets of images and their sentence descriptions to learn about the inter-modal correspondences between language and visual data. Our alignment model is based on a novel combination of Convolutional Neural Networks over image regions, bidirectional Recurrent Neural Networks over sentences, and a structured objective that aligns the two modalities through a multimodal embedding. We then describe a Multimodal Recurrent Neural Network architecture that uses the inferred alignments to learn to generate novel descriptions of image regions. We demonstrate that our alignment model produces state of the art results in retrieval experiments on Flickr8K, Flickr30K and MSCOCO datasets. We then show that the generated descriptions significantly outperform retrieval baselines on both full images and on a new dataset of region-level annotations." ] }
1811.11365
2902169904
Unsupervised neural machine translation (UNMT) has recently achieved remarkable results with only large monolingual corpora in each language. However, the uncertainty of associating target with source sentences makes UNMT theoretically an ill-posed problem. This work investigates the possibility of utilizing images for disambiguation to improve the performance of UNMT. Our assumption is intuitively based on the invariant property of images, i.e., the description of the same visual content by different languages should be approximately similar. We propose an unsupervised multi-modal machine translation (UMNMT) framework based on the language translation cycle consistency loss conditional on the image, aiming to learn the bidirectional multi-modal translation simultaneously. Through alternate training between multi-modal and uni-modal data, our inference model can translate with or without the image. On the widely used Multi30K dataset, the experimental results of our approach are significantly better than those of the text-only UNMT on the 2016 test dataset.
This problem was first proposed by @cite_12 in the WMT16 shared task at the intersection of natural language processing and computer vision. It can be considered as building a multi-source encoder on top of either an MT or an image caption model, depending on the definition of the extra source. Most multi-modal MT research still focuses on the supervised setting, while @cite_8 @cite_18 are, to the best of our knowledge, the two pioneering works that consider generalizing multi-modal MT to the unsupervised setting. However, their setups place restrictions on the input data format. For example, @cite_8 requires image-text pairs as training data but text-only input at inference, and @cite_18 requires the image-text pair format for both training and testing. These restrictions limit the model scale and generalization ability, since large amounts of monolingual corpora are more readily available and less expensive. Thus, in our model, we specifically address this issue with controllable attention and an alternate training scheme.
{ "cite_N": [ "@cite_18", "@cite_12", "@cite_8" ], "mid": [ "2573834658", "2509282593", "2962830144" ], "abstract": [ "We propose an approach to build a neural machine translation system with no supervised resources (i.e., no parallel corpora) using multimodal embedded representation over texts and images. Based on the assumption that text documents are often likely to be described with other multimedia information (e.g., images) somewhat related to the content, we try to indirectly estimate the relevance between two languages. Using multimedia as the \"pivot\", we project all modalities into one common hidden space where samples belonging to similar semantic concepts should come close to each other, whatever the observed space of each sample is. This modality-agnostic representation is the key to bridging the gap between different modalities. Putting a decoder on top of it, our network can flexibly draw the outputs from any input modality. Notably, in the testing phase, we need only source language texts as the input for translation. In experiments, we tested our method on two benchmarks to show that it can achieve reasonable translation performance. We compared and investigated several possible implementations and found that an end-to-end model that simultaneously optimized both rank loss in multimodal encoders and cross-entropy loss in decoders performed the best.", "This paper introduces and summarises the findings of a new shared task at the intersection of Natural Language Processing and Computer Vision: the generation of image descriptions in a target language, given an image and or one or more descriptions in a different (source) language. This challenge was organised along with the Conference on Machine Translation (WMT16), and called for system submissions for two task variants: (i) a translation task, in which a source language image description needs to be translated to a target language, (optionally) with additional cues from the corresponding image, and (ii) a description generation task, in which a target language description needs to be generated for an image, (optionally) with additional cues from source language descriptions of the same image. In this first edition of the shared task, 16 systems were submitted for the translation task and seven for the image description task, from a total of 10 teams.", "" ] }
1811.11329
2902243259
Reinforcement learning has steadily improved and now outperforms humans in many traditional games since the resurgence of deep neural networks. However, this success is not easy to replicate in autonomous driving, because real-world state spaces are extremely complex and action spaces are continuous, so fine control is required. Moreover, autonomous driving vehicles must also maintain functional safety in complex environments. To deal with these challenges, we first adopt the deep deterministic policy gradient (DDPG) algorithm, which has the capacity to handle complex state and action spaces in the continuous domain. We then choose The Open Racing Car Simulator (TORCS) as our environment to avoid physical damage. Meanwhile, we select a set of appropriate sensor information from TORCS and design our own rewarder. In order to fit the DDPG algorithm to TORCS, we design our network architecture for both the actor and the critic inside the DDPG paradigm. To demonstrate the effectiveness of our model, we evaluate on different modes in TORCS and show both quantitative and qualitative results.
Different from value-based methods, policy-based methods learn the policy directly; in other words, they output actions given the current state. Silver et al. @cite_19 propose a deterministic policy gradient algorithm to handle continuous action spaces efficiently without losing adequate exploration. By combining ideas from DQN and actor-critic, Lillicrap et al. @cite_13 then propose the deep deterministic policy gradient method and achieve end-to-end policy learning. Very recently, PGQL @cite_16 was proposed and can even outperform A3C by combining off-policy Q-learning with policy gradient. More importantly, in terms of autonomous driving, action spaces are continuous and fine control is required. All these policy-gradient methods can naturally handle continuous action spaces. However, adapting value-based methods, such as DQN, to the continuous domain by discretizing continuous action spaces might incur the curse of dimensionality and cannot meet the requirement of fine control.
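For reference, one DDPG update can be written compactly. The sketch below follows @cite_13 in structure, but the function signature and network interfaces are our own simplification; the soft (Polyak) updates of the target networks would follow each step.

import torch
import torch.nn.functional as F

def ddpg_update(actor, critic, target_actor, target_critic,
                batch, actor_opt, critic_opt, gamma=0.99):
    s, a, r, s_next, done = batch
    # Critic: regress Q(s, a) onto the bootstrapped TD target.
    with torch.no_grad():
        q_next = target_critic(s_next, target_actor(s_next))
        q_target = r + gamma * (1.0 - done) * q_next
    critic_loss = F.mse_loss(critic(s, a), q_target)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()
    # Actor: deterministic policy gradient, i.e., ascend Q(s, mu(s)).
    actor_loss = -critic(s, actor(s)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()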
{ "cite_N": [ "@cite_19", "@cite_16", "@cite_13" ], "mid": [ "2165150801", "", "2173248099" ], "abstract": [ "In this paper we consider deterministic policy gradient algorithms for reinforcement learning with continuous actions. The deterministic policy gradient has a particularly appealing form: it is the expected gradient of the action-value function. This simple form means that the deterministic policy gradient can be estimated much more efficiently than the usual stochastic policy gradient. To ensure adequate exploration, we introduce an off-policy actor-critic algorithm that learns a deterministic target policy from an exploratory behaviour policy. We demonstrate that deterministic policy gradient algorithms can significantly outperform their stochastic counterparts in high-dimensional action spaces.", "", "We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs." ] }
1811.11368
2902690152
This paper studies distributed estimation and inference for a general statistical problem with a convex loss that could be non-differentiable. For the purpose of efficient computation, we restrict ourselves to stochastic first-order optimization, which enjoys low per-iteration complexity. To motivate the proposed method, we first investigate the theoretical properties of a straightforward Divide-and-Conquer Stochastic Gradient Descent (DC-SGD) approach. Our theory shows that there is a restriction on the number of machines and this restriction becomes more stringent when the dimension @math is large. To overcome this limitation, this paper proposes a new multi-round distributed estimation procedure that approximates the Newton step only using stochastic subgradient. The key component in our method is the proposal of a computationally efficient estimator of @math , where @math is the population Hessian matrix and @math is any given vector. Instead of estimating @math (or @math ) that usually requires the second-order differentiability of the loss, the proposed First-Order Newton-type Estimator (FONE) directly estimates the vector of interest @math as a whole and is applicable to non-differentiable losses. Our estimator also facilitates the inference for the empirical risk minimizer. It turns out that the key term in the limiting covariance has the form of @math , which can be estimated by FONE.
In addition, our FONE of @math is related to a recently developed stochastic first-order approach---the stochastic variance reduced gradient (SVRG; see, e.g., @cite_21 @cite_10 @cite_1 and references therein). Our method subsumes SVRG as a special case: indeed, when the @math , our iterative algorithm (in its non-distributed version) essentially reduces to SVRG, as sketched below. On the other hand, we allow a general @math vector, which does not need to be an averaged gradient (e.g., for the purpose of inference in ). Moreover, the theoretical development of SVRG requires the unbiasedness of the stochastic gradient with respect to the averaged gradient @math , the differentiability, and the uniform strong convexity of the loss function @math . In contrast, our theoretical results do not require any of these conditions. In fact, the motivation for our procedure is fundamentally different from that for SVRG: our method aims to provide an estimator @math with the population matrix @math for any @math , while most of the SVRG literature aims to solve a finite-sum optimization problem @math for a differentiable strongly-convex @math .
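For context, plain SVRG for a finite sum (1/n) * sum_i f_i(w) can be sketched as follows; the notation is ours, and grad_i(w, i) is assumed to return the gradient of the i-th loss term at w. The recentering by a snapshot full gradient is the variance-reduction step referred to above.

import numpy as np

def svrg(grad_i, w0, n, lr=0.1, outer=10, inner=None):
    # Each inner step uses grad_i(w) - grad_i(w_snap) + full_grad, an
    # unbiased gradient estimate whose variance vanishes as w approaches
    # the snapshot, which permits a constant step size.
    w_snap = w0.copy()
    inner = inner or 2 * n
    for _ in range(outer):
        full_grad = np.mean([grad_i(w_snap, i) for i in range(n)], axis=0)
        w = w_snap.copy()
        for _ in range(inner):
            i = np.random.randint(n)
            w = w - lr * (grad_i(w, i) - grad_i(w_snap, i) + full_grad)
        w_snap = w
    return w_snap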
{ "cite_N": [ "@cite_1", "@cite_21", "@cite_10" ], "mid": [ "2722088290", "2107438106", "2769669795" ], "abstract": [ "We present novel minibatch stochastic optimization methods for empirical risk minimization problems, the methods efficiently leverage variance reduced first-order and sub-sampled higher-order information to accelerate the convergence speed. For quadratic objectives, we prove improved iteration complexity over state-of-the-art under reasonable assumptions. We also provide empirical evidence of the advantages of our method compared to existing approaches in the literature.", "Stochastic gradient descent is popular for large scale optimization but has slow convergence asymptotically due to the inherent variance. To remedy this problem, we introduce an explicit variance reduction method for stochastic gradient descent which we call stochastic variance reduced gradient (SVRG). For smooth and strongly convex functions, we prove that this method enjoys the same fast convergence rate as those of stochastic dual coordinate ascent (SDCA) and Stochastic Average Gradient (SAG). However, our analysis is significantly simpler and more intuitive. Moreover, unlike SDCA or SAG, our method does not require the storage of gradients, and thus is more easily applicable to complex problems such as some structured prediction problems and neural network learning.", "" ] }
1811.11323
2903481008
Most existing methods for object segmentation in computer vision are formulated as a labeling task. This, in general, can be cast as a pixel-wise label assignment task, which closely mirrors the structure of a hidden Markov random field. In terms of a Markov random field, each pixel can be regarded as a state with a transition probability to its neighboring pixels, and the label behind each pixel is a latent variable with an emission probability from its corresponding state. In this paper, we review several modern image labeling methods based on Markov random fields and conditional random fields, and compare their results with those of some classical image labeling methods. The experiments demonstrate that the introduction of Markov random fields and conditional random fields makes a substantial difference in the segmentation results.
One line of work on the image labeling task consists of traditional unsupervised methods that exploit various kinds of low-level information. By measuring color @cite_22 and texture similarity, many approaches based on clustering methods such as k-means @cite_20 and mean-shift @cite_16 try to group local regions with high similarity and assign them the same label. For computational convenience, superpixel approaches @cite_27 @cite_9 are often used as a pre-processing step to group potentially similar pixels together; subsequent processing then operates on superpixels instead of individual pixels. The limitation of these approaches stems from their use of only local information, so later work began to combine global and local context to obtain a better sense of the semantic information in the image. In @cite_19 , a multi-scale structure is used to segment the image based on edge detection at different scale levels. In @cite_25 @cite_5 , the combination of global context and local information is shown to be very useful for the segmentation task.
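As a toy illustration of the color-only, low-level grouping that these unsupervised methods rely on, the sketch below clusters pixels by color with k-means and assigns each cluster a label. The helper name, the value of k, and the choice of scikit-learn are assumptions for illustration, not any cited method's implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_label_image(image, k=5):
    """Toy color-only labeling: cluster pixels by color with k-means.

    image: H x W x 3 array (e.g., RGB). Returns an H x W label map.
    Real systems add texture cues and superpixel pre-processing; the
    helper name and k are illustrative assumptions.
    """
    h, w, c = image.shape
    pixels = image.reshape(-1, c).astype(np.float64)
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(pixels)
    return labels.reshape(h, w)
```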
{ "cite_N": [ "@cite_22", "@cite_9", "@cite_19", "@cite_27", "@cite_5", "@cite_16", "@cite_25", "@cite_20" ], "mid": [ "2460588056", "", "1572910856", "2010975286", "", "2067191022", "1817277359", "" ], "abstract": [ "To aid an automatic taxiing system for unmanned aircraft, this paper presents a colour based method for semantic segmentation and image classification in an aerodrome environment with the intention to use the classification output to aid navigation and collision avoidance. Based on previous work, this machine vision system uses semantic segmentation to interpret the scene. Following an initial superpixel based segmentation procedure, a colour based Bayesian Network classifier is trained and used to semantically classify each segmented cluster. HSV colourspace is adopted as it is close to the way of human vision perception of the world, and each channel shows significant differentiation between classes. Luminance is used to identify surface lines on the taxiway, which is then fused with colour classification to give improved classification results. The classification performance of the proposed colour based classifier is tested in a real aerodrome, which demonstrates that the proposed method outperforms a previously developed texture only based method.", "", "In this paper, we propose a novel multi-scale edge detection and vector field design scheme. We show that using multiscale techniques edge detection and segmentation quality on natural images can be improved significantly. Our approach eliminates the need for explicit scale selection and edge tracking. Our method favors edges that exist at a wide range of scales and localize these edges at finer scales. This work is then extended to multi-scale image segmentation using our anisotropic diffusion scheme.", "Grouping cues can affect the performance of segmentation greatly. In this paper, we show that superpixels (image segments) can provide powerful grouping cues to guide segmentation, where superpixels can be collected easily by (over)-segmenting the image using any reasonable existing segmentation algorithms. Generated by different algorithms with varying parameters, superpixels can capture diverse and multi-scale visual patterns of a natural image. Successful integration of the cues from a large multitude of superpixels presents a promising yet not fully explored direction. In this paper, we propose a novel segmentation framework based on bipartite graph partitioning, which is able to aggregate multi-layer superpixels in a principled and very effective manner. Computationally, it is tailored to unbalanced bipartite graph structure and leads to a highly efficient, linear-time spectral algorithm. Our method achieves significantly better performance on the Berkeley Segmentation Database compared to state-of-the-art techniques.", "", "A general non-parametric technique is proposed for the analysis of a complex multimodal feature space and to delineate arbitrarily shaped clusters in it. The basic computational module of the technique is an old pattern recognition procedure: the mean shift. For discrete data, we prove the convergence of a recursive mean shift procedure to the nearest stationary point of the underlying density function and, thus, its utility in detecting the modes of the density. The relation of the mean shift procedure to the Nadaraya-Watson estimator from kernel regression and the robust M-estimators; of location is also established. 
Algorithms for two low-level vision tasks discontinuity-preserving smoothing and image segmentation - are described as applications. In these algorithms, the only user-set parameter is the resolution of the analysis, and either gray-level or color images are accepted as input. Extensive experimental results illustrate their excellent performance.", "We present a technique for adding global context to deep convolutional networks for semantic segmentation. The approach is simple, using the average feature for a layer to augment the features at each location. In addition, we study several idiosyncrasies of training, significantly increasing the performance of baseline networks (e.g. from FCN). When we add our proposed global feature, and a technique for learning normalization parameters, accuracy increases consistently even over our improved versions of the baselines. Our proposed approach, ParseNet, achieves state-of-the-art performance on SiftFlow and PASCAL-Context with small additional computational cost over baselines, and near current state-of-the-art performance on PASCAL VOC 2012 semantic segmentation with a simple approach. Code is available at this https URL .", "" ] }
1811.11323
2903481008
Most existing methods for object segmentation in computer vision are formulated as a labeling task. This, in general, can be cast as a pixel-wise label assignment task, which closely mirrors the structure of a hidden Markov random field. In terms of a Markov random field, each pixel can be regarded as a state with a transition probability to its neighboring pixels, and the label behind each pixel is a latent variable with an emission probability from its corresponding state. In this paper, we review several modern image labeling methods based on Markov random fields and conditional random fields, and compare their results with those of some classical image labeling methods. The experiments demonstrate that the introduction of Markov random fields and conditional random fields makes a substantial difference in the segmentation results.
Markov random fields were first introduced for segmentation in @cite_23 , initially for medical image applications. Attempts to model the segmentation task with Markov random fields then grew rapidly owing to the similarity of their representations. However, learning and inference for Markov random fields are computationally expensive, so many approximation and learning schemes were invented to obtain robust segmentation results. Beyond Markov random fields, one model worth highlighting is the conditional random field, which has been heavily used for segmentation in recent decades @cite_4 @cite_17 @cite_12 . The benefit of the conditional random field is that, for the image labeling task, we do not actually care about the joint distribution of the image pixels; modeling only the labels conditioned on the image observation is sufficient and greatly reduces the number of parameters to estimate. Most recently, with the deep learning revolution, the conditional random field is often used as a post-processing step @cite_21 to refine the segmentation result produced by a deep convolutional neural network. Further work @cite_7 tries to incorporate the conditional random field into the network architecture instead of using it as post-processing, and achieves better results.
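To make the MRF formulation above more tangible, here is a hedged sketch of the energy of a pixel labeling under unary potentials plus a simple Potts pairwise term on 4-neighbors; minimizing such an energy (e.g., with graph cuts or ICM) is what the approximation schemes mentioned above target. All names and the specific potential form are illustrative assumptions.

```python
import numpy as np

def potts_energy(labels, unary, beta=1.0):
    """Energy of a pixel labeling under an MRF with Potts pairwise terms.

    labels: H x W integer label map (the latent variables).
    unary:  H x W x L array; unary[i, j, l] is the cost of label l at
            pixel (i, j), e.g., negative log emission probabilities.
    beta:   smoothness weight penalizing 4-neighbor label disagreement.
    Inference (graph cuts, ICM, mean field, ...) seeks a low-energy map.
    """
    h, w = labels.shape
    # Data term: cost of the chosen label at every pixel.
    e = unary[np.arange(h)[:, None], np.arange(w)[None, :], labels].sum()
    # Smoothness term: count disagreeing neighbor pairs.
    e += beta * (labels[1:, :] != labels[:-1, :]).sum()   # vertical pairs
    e += beta * (labels[:, 1:] != labels[:, :-1]).sum()   # horizontal pairs
    return e
```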
{ "cite_N": [ "@cite_4", "@cite_7", "@cite_21", "@cite_23", "@cite_12", "@cite_17" ], "mid": [ "2095844239", "2124592697", "2952865063", "2136573752", "", "" ], "abstract": [ "We propose an approach to include contextual features for labeling images, in which each pixel is assigned to one of a finite set of labels. The features are incorporated into a probabilistic framework, which combines the outputs of several components. Components differ in the information they encode. Some focus on the image-label mapping, while others focus solely on patterns within the label field. Components also differ in their scale, as some focus on fine-resolution patterns while others on coarser, more global structure. A supervised version of the contrastive divergence algorithm is applied to learn these features from labeled image data. We demonstrate performance on two real-world image databases and compare it to a classifier and a Markov random field.", "Pixel-level labelling tasks, such as semantic segmentation, play a central role in image understanding. Recent approaches have attempted to harness the capabilities of deep learning techniques for image recognition to tackle pixel-level labelling tasks. One central issue in this methodology is the limited capacity of deep learning techniques to delineate visual objects. To solve this problem, we introduce a new form of convolutional neural network that combines the strengths of Convolutional Neural Networks (CNNs) and Conditional Random Fields (CRFs)-based probabilistic graphical modelling. To this end, we formulate Conditional Random Fields with Gaussian pairwise potentials and mean-field approximate inference as Recurrent Neural Networks. This network, called CRF-RNN, is then plugged in as a part of a CNN to obtain a deep network that has desirable properties of both CNNs and CRFs. Importantly, our system fully integrates CRF modelling with CNNs, making it possible to train the whole deep network end-to-end with the usual back-propagation algorithm, avoiding offline post-processing methods for object delineation. We apply the proposed method to the problem of semantic image segmentation, obtaining top results on the challenging Pascal VOC 2012 segmentation benchmark.", "In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First, we highlight convolution with upsampled filters, or 'atrous convolution', as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second, we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields-of-views, thus capturing objects as well as image context at multiple scales. Third, we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but has a toll on localization accuracy. 
We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed \"DeepLab\" system sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7 mIOU in the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.", "The finite mixture (FM) model is the most commonly used model for statistical segmentation of brain magnetic resonance (MR) images because of its simple mathematical form and the piecewise constant nature of ideal brain MR images. However, being a histogram-based model, the FM has an intrinsic limitation-no spatial information is taken into account. This causes the FM model to work only on well-defined images with low levels of noise; unfortunately, this is often not the the case due to artifacts such as partial volume effect and bias field distortion. Under these conditions, FM model-based methods produce unreliable results. Here, the authors propose a novel hidden Markov random field (HMRF) model, which is a stochastic process generated by a MRF whose state sequence cannot be observed directly but which can be indirectly estimated through observations. Mathematically, it can be shown that the FM model is a degenerate version of the HMRF model. The advantage of the HMRF model derives from the way in which the spatial information is encoded through the mutual influences of neighboring sites. Although MRF modeling has been employed in MR image segmentation by other researchers, most reported methods are limited to using MRF as a general prior in an FM model-based approach. To fit the HMRF model, an EM algorithm is used. The authors show that by incorporating both the HMRF model and the EM algorithm into a HMRF-EM framework, an accurate and robust segmentation can be achieved. More importantly, the HMRF-EM framework can easily be combined with other techniques. As an example, the authors show how the bias field correction algorithm of Guillemaud and Brady (1997) can be incorporated into this framework to achieve a three-dimensional fully automated approach for brain MR image segmentation.", "", "" ] }
1811.11283
2902912462
Most of the existing work on automatic facial expression analysis focuses on discrete emotion recognition, or facial action unit detection. However, facial expressions do not always fall neatly into pre-defined semantic categories. Also, the similarity between expressions measured in the action unit space need not correspond to how humans perceive expression similarity. Different from previous work, our goal is to describe facial expressions in a continuous fashion using a compact embedding space that mimics human visual preferences. To achieve this goal, we collect a large-scale faces-in-the-wild dataset with human annotations in the form: Expressions A and B are visually more similar when compared to expression C, and use this dataset to train a neural network that produces a compact (16-dimensional) expression embedding. We experimentally demonstrate that the learned embedding can be successfully used for various applications such as expression retrieval, photo album summarization, and emotion recognition. We also show that the embedding learned using the proposed dataset performs better than several other embeddings learned using existing emotion or action unit datasets.
A self-supervised approach was proposed in @cite_7 to learn a 256-dimensional facial attribute embedding by watching videos, and the learned embedding was used for multiple tasks such as head pose estimation, facial landmark prediction, and emotion recognition by training an additional classification or regression layer on labeled training data. However, as reported in @cite_7 , its performance is worse than that of existing approaches on these tasks. Different from @cite_7 , we follow a fully-supervised approach for learning a compact (16-dimensional) expression embedding. Several existing works have used triplet-based loss functions for learning image representations. While the majority of them use category-label-based triplets @cite_54 @cite_23 @cite_6 @cite_0 @cite_34 @cite_17 @cite_9 @cite_46 , some existing works @cite_18 @cite_37 have focused on learning fine-grained representations. While @cite_37 used a similarity measure computed from several existing feature representations to generate ground-truth annotations for the triplets, @cite_18 used text-image relevance based on Google image search to annotate the triplets. Different from these approaches, we use human raters to annotate the triplets. Also, none of these works focus on facial expressions.
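For reference, the sketch below shows the standard triplet margin loss that the cited triplet-based methods build on. With the human-rated triplets described above ("A and B are more similar than C"), the positive/negative roles come from the rater annotations; the function name and margin value are illustrative assumptions.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Standard triplet margin loss on batches of embedding vectors.

    Encourages ||a - p||^2 + margin <= ||a - n||^2, i.e., the anchor
    lies closer to the positive than to the negative in the embedding
    space. margin is an illustrative value.
    """
    d_pos = np.sum((anchor - positive) ** 2, axis=-1)
    d_neg = np.sum((anchor - negative) ** 2, axis=-1)
    return np.maximum(0.0, d_pos - d_neg + margin)
```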
{ "cite_N": [ "@cite_18", "@cite_37", "@cite_7", "@cite_54", "@cite_9", "@cite_6", "@cite_0", "@cite_23", "@cite_46", "@cite_34", "@cite_17" ], "mid": [ "1532499126", "1975517671", "", "2895347732", "2963988212", "2598634450", "2470322391", "2963744743", "2964076257", "2096733369", "2963026686" ], "abstract": [ "Learning a measure of similarity between pairs of objects is an important generic problem in machine learning. It is particularly useful in large scale applications like searching for an image that is similar to a given image or finding videos that are relevant to a given video. In these tasks, users look for objects that are not only visually similar but also semantically related to a given object. Unfortunately, the approaches that exist today for learning such semantic similarity do not scale to large data sets. This is both because typically their CPU and storage requirements grow quadratically with the sample size, and because many methods impose complex positivity constraints on the space of learned similarity functions. The current paper presents OASIS, an Online Algorithm for Scalable Image Similarity learning that learns a bilinear similarity measure over sparse representations. OASIS is an online dual approach using the passive-aggressive family of learning algorithms with a large margin criterion and an efficient hinge loss cost. Our experiments show that OASIS is both fast and accurate at a wide range of scales: for a data set with thousands of images, it achieves better results than existing state-of-the-art methods, while being an order of magnitude faster. For large, web scale, data sets, OASIS can be trained on more than two million images from 150K text queries within 3 days on a single CPU. On this large scale data set, human evaluations showed that 35 of the ten nearest neighbors of a given test image, as found by OASIS, were semantically relevant to that image. This suggests that query independent similarity could be accurately learned even for large scale data sets that could not be handled before.", "Learning fine-grained image similarity is a challenging task. It needs to capture between-class and within-class image differences. This paper proposes a deep ranking model that employs deep learning techniques to learn similarity metric directly from images. It has higher learning capability than models based on hand-crafted features. A novel multiscale network structure has been developed to describe the images effectively. An efficient triplet sampling algorithm is also proposed to learn the model with distributed asynchronized stochastic gradient. Extensive experiments show that the proposed algorithm outperforms models based on hand-crafted visual features and deep classification models.", "", "We present a novel hierarchical triplet loss (HTL) capable of automatically collecting informative training samples (triplets) via a defined hierarchical tree that encodes global context information. This allows us to cope with the main limitation of random sampling in training a conventional triplet loss, which is a central issue for deep metric learning. Our main contributions are two-fold. (i) we construct a hierarchical class-level tree where neighboring classes are merged recursively. The hierarchical structure naturally captures the intrinsic data distribution over the whole dataset. (ii) we formulate the problem of triplet collection by introducing a new violate margin, which is computed dynamically based on the designed hierarchical tree. 
This allows it to automatically select meaningful hard samples with the guide of global context. It encourages the model to learn more discriminative features from visual similar classes, leading to faster convergence and better performance. Our method is evaluated on the tasks of image retrieval and face recognition, where it outperforms the standard triplet loss substantially by 1 –18 , and achieves new state-of-the-art performance on a number of benchmarks.", "The modern image search system requires semantic understanding of image, and a key yet under-addressed problem is to learn a good metric for measuring the similarity between images. While deep metric learning has yielded impressive performance gains by extracting high level abstractions from image data, a proper objective loss function becomes the central issue to boost the performance. In this paper, we propose a novel angular loss, which takes angle relationship into account, for learning better similarity metric. Whereas previous metric learning methods focus on optimizing the similarity (contrastive loss) or relative similarity (triplet loss) of image pairs, our proposed method aims at constraining the angle at the negative point of triplet triangles. Several favorable properties are observed when compared with conventional methods. First, scale invariance is introduced, improving the robustness of objective against feature variance. Second, a third-order geometric constraint is inherently imposed, capturing additional local structure of triplet triangles than contrastive loss or triplet loss. Third, better convergence has been demonstrated by experiments on three publicly available datasets.", "In the past few years, the field of computer vision has gone through a revolution fueled mainly by the advent of large datasets and the adoption of deep convolutional neural networks for end-to-end learning. The person re-identification subfield is no exception to this. Unfortunately, a prevailing belief in the community seems to be that the triplet loss is inferior to using surrogate losses (classification, verification) followed by a separate metric learning step. We show that, for models trained from scratch as well as pretrained ones, using a variant of the triplet loss to perform end-to-end deep metric learning outperforms most other published methods by a large margin.", "The growing explosion in the use of surveillance cameras in public security highlights the importance of vehicle search from a large-scale image or video database. However, compared with person re-identification or face recognition, vehicle search problem has long been neglected by researchers in vision community. This paper focuses on an interesting but challenging problem, vehicle re-identification (a.k.a precise vehicle search). We propose a Deep Relative Distance Learning (DRDL) method which exploits a two-branch deep convolutional network to project raw vehicle images into an Euclidean space where distance can be directly used to measure the similarity of arbitrary two vehicles. To further facilitate the future research on this problem, we also present a carefully-organized largescale image database \"VehicleID\", which includes multiple images of the same vehicle captured by different realworld cameras in a city. We evaluate our DRDL method on our VehicleID dataset and another recently-released vehicle model classification dataset \"CompCars\" in three sets of experiments: vehicle re-identification, vehicle model verification and vehicle retrieval. 
Experimental results show that our method can achieve promising results and outperforms several state-of-the-art approaches.", "Most existing 3D object recognition algorithms focus on leveraging the strong discriminative power of deep learning models with softmax loss for the classification of 3D data, while learning discriminative features with deep metric learning for 3D object retrieval is more or less neglected. In the paper, we study variants of deep metric learning losses for 3D object retrieval, which did not receive enough attention from this area. First, two kinds of representative losses, triplet loss and center loss, are introduced which could learn more discriminative features than traditional classification loss. Then, we propose a novel loss named triplet-center loss, which can further enhance the discriminative power of the features. The proposed triplet-center loss learns a center for each class and requires that the distances between samples and centers from the same class are closer than those from different classes. Extensive experimental results on two popular 3D object retrieval benchmarks and two widely-adopted sketch-based 3D shape retrieval benchmarks consistently demonstrate the effectiveness of our proposed loss, and significant improvements have been achieved compared with the state-of-the-arts.", "In this paper, we aim to learn a mapping (or embedding) from images to a compact binary space in which Hamming distances correspond to a ranking measure for the image retrieval task. We make use of a triplet loss because this has been shown to be most effective for ranking problems. However, training in previous works can be prohibitively expensive due to the fact that optimization is directly performed on the triplet space, where the number of possible triplets for training is cubic in the number of training examples. To address this issue, we propose to formulate high-order binary codes learning as a multi-label classification problem by explicitly separating learning into two interleaved stages. To solve the first stage, we design a large-scale high-order binary codes inference algorithm to reduce the high-order objective to a standard binary quadratic problem such that graph cuts can be used to efficiently infer the binary codes which serve as the labels of each training datum. In the second stage we propose to map the original image to compact binary codes via carefully designed deep convolutional neural networks (CNNs) and the hashing function fitting can be solved by training binary CNN classifiers. An incremental interleaved optimization strategy is proffered to ensure that these two steps are interactive with each other during training for better accuracy. We conduct experiments on several benchmark datasets, which demonstrate both improved training time (by as much as two orders of magnitude) as well as producing state-of-the-art hashing for various retrieval tasks.", "Despite significant recent advances in the field of face recognition [10, 14, 15, 17], implementing face verification and recognition efficiently at scale presents serious challenges to current approaches. In this paper we present a system, called FaceNet, that directly learns a mapping from face images to a compact Euclidean space where distances directly correspond to a measure of face similarity. 
Once this space has been produced, tasks such as face recognition, verification and clustering can be easily implemented using standard techniques with FaceNet embeddings as feature vectors.", "Learning the distance metric between pairs of examples is of great importance for learning and visual recognition. With the remarkable success from the state of the art convolutional neural networks, recent works [1, 31] have shown promising results on discriminatively training the networks to learn semantic feature embeddings where similar examples are mapped close to each other and dissimilar examples are mapped farther apart. In this paper, we describe an algorithm for taking full advantage of the training batches in the neural network training by lifting the vector of pairwise distances within the batch to the matrix of pairwise distances. This step enables the algorithm to learn the state of the art feature embedding by optimizing a novel structured prediction objective on the lifted problem. Additionally, we collected Stanford Online Products dataset: 120k images of 23k classes of online products for metric learning. Our experiments on the CUB-200-2011 [37], CARS196 [19], and Stanford Online Products datasets demonstrate significant improvement over existing deep feature embedding methods on all experimented embedding sizes with the GoogLeNet [33] network. The source code and the dataset are available at: https: github.com rksltnl Deep-Metric-Learning-CVPR16." ] }
1811.11273
2903008052
We present a method for encoding game logs as numeric features in the card game Dominion. We then run the manifold learning algorithm t-SNE on these encodings to visualize the landscape of player strategies. By quantifying game states as the relative prevalence of cards in a player's deck, we create visualizations that capture qualitative differences in player strategies. Different ways of deviating from the starting game state appear as different rays in the visualization, giving it an intuitive explanation. This is a promising new direction for understanding player strategies across games that vary in length.
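A minimal sketch of the encoding-plus-embedding pipeline described above: each deck becomes a vector of relative card prevalences, which t-SNE then projects to 2D. The helper names, the scikit-learn t-SNE choice, and the perplexity value are assumptions for illustration.

```python
import numpy as np
from sklearn.manifold import TSNE

def deck_features(decks, vocab):
    """Encode each deck as the relative prevalence of every card.

    decks: list of dicts mapping card name -> count in the deck.
    vocab: ordered list of all card names in the game.
    """
    x = np.array([[d.get(card, 0) for card in vocab] for d in decks],
                 dtype=np.float64)
    return x / np.maximum(x.sum(axis=1, keepdims=True), 1.0)

def embed_decks(decks, vocab, perplexity=30):
    """Project the deck features to 2D with t-SNE for visualization."""
    return TSNE(n_components=2, perplexity=perplexity).fit_transform(
        deck_features(decks, vocab))
```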
In earlier work on clustering of personas, Kevin Gold developed intuitions about what certain players strive for, and then used Bayesian network models to cluster players into groups @cite_0 . However, the final clusters did not reflect the priors, and the vast majority of players were placed into a single cluster. Gold concluded that clusters may not be as discrete as we might want: many players will try a combination of synergies in a single game, just as many players enjoy both speedy gameplay and defeating monsters. Building on Gold's conclusions, we expect a continuum between different ways of playing.
{ "cite_N": [ "@cite_0" ], "mid": [ "2210286836" ], "abstract": [ "Probabilistic models were fit to logs of player actions in the card game Dominion in an attempt to find evidence of personality types that could be used to classify player behavior as well as generate probabilistic bot behavior. Expectation Maximization seeded with players' self-assessments for their motivations was run for two different model types - Naive Bayes and a trigram model - to uncover three clusters each. For both model structures, most players were classified as belonging to a single large cluster that combined the goals of splashy plays, clever combos, and effective play, cross-cutting the original categories - a cautionary tale for research that assumes players can be classified into one category or another. However, subjects qualitatively report that the different model structures play very differently, with the Naive Bayes model more creatively combining cards." ] }
1811.11273
2903008052
We present a method for encoding game logs as numeric features in the card game Dominion. We then run the manifold learning algorithm t-SNE on these encodings to visualize the landscape of player strategies. By quantifying game states as the relative prevalence of cards in a player's deck, we create visualizations that capture qualitative differences in player strategies. Different ways of deviating from the starting game state appear as different rays in the visualization, giving it an intuitive explanation. This is a promising new direction for understanding player strategies across games that vary in length.
Gold's paper proposed two specific models for predicting card buys: a trigram model based on the two previous buys, and a naive Bayes model based on all cards currently in the player's deck. The author notes that the natural follow-up experiment should be "to determine whether EM can assign players to one model structure or the other based on how well the models capture player behavior" @cite_0 . Thus Gold proposed clustering players based on which factors inform their decision making.
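As a sketch of the trigram buy model Gold describes (conditioning the next buy on the two previous buys), the toy estimator below counts context/next-card co-occurrences in game logs. It is a maximum-likelihood version with a hypothetical helper name; a real model would need smoothing for unseen contexts.

```python
from collections import defaultdict

def trigram_buy_probs(buy_sequences):
    """Estimate P(next buy | two previous buys) from game logs.

    buy_sequences: list of per-game card-buy lists, e.g.
    [["Silver", "Village", "Smithy", ...], ...]. A toy maximum-likelihood
    version of the trigram model; real models need smoothing for
    unseen contexts.
    """
    counts = defaultdict(lambda: defaultdict(int))
    for seq in buy_sequences:
        for a, b, c in zip(seq, seq[1:], seq[2:]):
            counts[(a, b)][c] += 1  # context (a, b) followed by c
    return {ctx: {card: n / sum(nxt.values()) for card, n in nxt.items()}
            for ctx, nxt in counts.items()}
```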
{ "cite_N": [ "@cite_0" ], "mid": [ "2210286836" ], "abstract": [ "Probabilistic models were fit to logs of player actions in the card game Dominion in an attempt to find evidence of personality types that could be used to classify player behavior as well as generate probabilistic bot behavior. Expectation Maximization seeded with players' self-assessments for their motivations was run for two different model types - Naive Bayes and a trigram model - to uncover three clusters each. For both model structures, most players were classified as belonging to a single large cluster that combined the goals of splashy plays, clever combos, and effective play, cross-cutting the original categories - a cautionary tale for research that assumes players can be classified into one category or another. However, subjects qualitatively report that the different model structures play very differently, with the Naive Bayes model more creatively combining cards." ] }
1811.11387
2926645869
The success of deep neural networks generally requires a vast amount of labeled training data, which is expensive and infeasible at scale, especially for video collections. To alleviate this problem, in this paper, we propose 3DRotNet: a fully self-supervised approach to learn spatiotemporal features from unlabeled videos. A set of rotations is applied to all videos, and a pretext task is defined as the prediction of these rotations. When accomplishing this task, 3DRotNet is actually trained to understand the semantic concepts and motions in videos. In other words, it learns a spatiotemporal video representation, which can be transferred to improve video understanding tasks on small datasets. Our extensive experiments successfully demonstrate the effectiveness of the proposed framework on action recognition, leading to significant improvements over the state-of-the-art self-supervised methods. With the self-supervised pre-trained 3DRotNet from large datasets, the recognition accuracy is boosted by 20.4 on UCF101 and 16.7 on HMDB51, respectively, compared to models trained from scratch.
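A minimal sketch of the rotation pretext task described in the abstract above: clips are rotated by a random multiple of 90 degrees and the rotation index serves as the free label. It assumes square frames so rotated clips stack cleanly, and the helper name is illustrative; the actual 3DRotNet pipeline is more involved.

```python
import numpy as np

def rotation_pretext_batch(clips):
    """Build self-supervised (input, label) pairs by rotating video clips.

    clips: N x T x H x W x C array with square frames (H == W), so the
    rotated clips stack cleanly. Each clip is rotated by one of
    {0, 90, 180, 270} degrees; the rotation index is the free pretext
    label, so no human annotation is needed.
    """
    xs, ys = [], []
    for clip in clips:
        k = np.random.randint(4)                   # rotation class 0..3
        xs.append(np.rot90(clip, k, axes=(1, 2)))  # rotate H and W axes
        ys.append(k)
    return np.stack(xs), np.array(ys)
```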
Although there has been some work on self-supervised learning from videos, most of it still employs 2DConvNets to learn image representations, using temporal information in videos as the supervision signal. Pathak proposed to train a 2DConvNet to segment moving objects obtained by unsupervised motion segmentation of videos @cite_17 . Misra proposed to train a 2DConvNet to verify whether a sequence of frames is in the correct temporal order @cite_21 . Wang and Gupta proposed a Siamese-triplet network with a ranking loss to train a 2DConvNet on patches from a video sequence @cite_38 . Fernando proposed to learn the video representation with odd-one-out networks that identify the odd element from a set of related elements using a 2DConvNet @cite_27 . Lee proposed to take shuffled frame sequences as input to a 2DConvNet and sort the sequences @cite_16 . In addition, LSTMs can also be used to learn visual features from videos, especially to model the temporal information among frames @cite_3 @cite_12 .
{ "cite_N": [ "@cite_38", "@cite_21", "@cite_3", "@cite_27", "@cite_16", "@cite_12", "@cite_17" ], "mid": [ "219040644", "2487442924", "", "2950809610", "2743563068", "", "2575671312" ], "abstract": [ "Is strong supervision necessary for learning a good visual representation? Do we really need millions of semantically-labeled images to train a Convolutional Neural Network (CNN)? In this paper, we present a simple yet surprisingly powerful approach for unsupervised learning of CNN. Specifically, we use hundreds of thousands of unlabeled videos from the web to learn visual representations. Our key idea is that visual tracking provides the supervision. That is, two patches connected by a track should have similar visual representation in deep feature space since they probably belong to same object or object part. We design a Siamese-triplet network with a ranking loss function to train this CNN representation. Without using a single image from ImageNet, just using 100K unlabeled videos and the VOC 2012 dataset, we train an ensemble of unsupervised networks that achieves 52 mAP (no bounding box regression). This performance comes tantalizingly close to its ImageNet-supervised counterpart, an ensemble which achieves a mAP of 54.4 . We also show that our unsupervised network can perform competitively in other tasks such as surface-normal estimation.", "In this paper, we present an approach for learning a visual representation from the raw spatiotemporal signals in videos. Our representation is learned without supervision from semantic labels. We formulate our method as an unsupervised sequential verification task, i.e., we determine whether a sequence of frames from a video is in the correct temporal order. With this simple task and no semantic labels, we learn a powerful visual representation using a Convolutional Neural Network (CNN). The representation contains complementary information to that learned from supervised image datasets like ImageNet. Qualitative results show that our method captures information that is temporally varying, such as human pose. When used as pre-training for action recognition, our method gives significant gains over learning without external data on benchmark datasets like UCF101 and HMDB51. To demonstrate its sensitivity to human pose, we show results for pose estimation on the FLIC and MPII datasets that are competitive, or better than approaches using significantly more supervision. Our method can be combined with supervised representations to provide an additional boost in accuracy.", "", "We propose a new self-supervised CNN pre-training technique based on a novel auxiliary task called \"odd-one-out learning\". In this task, the machine is asked to identify the unrelated or odd element from a set of otherwise related elements. We apply this technique to self-supervised video representation learning where we sample subsequences from videos and ask the network to learn to predict the odd video subsequence. The odd video subsequence is sampled such that it has wrong temporal order of frames while the even ones have the correct temporal order. Therefore, to generate a odd-one-out question no manual annotation is required. Our learning machine is implemented as multi-stream convolutional neural network, which is learned end-to-end. Using odd-one-out networks, we learn temporal representations for videos that generalizes to other related tasks such as action recognition. 
On action classification, our method obtains 60.3 on the UCF101 dataset using only UCF101 data for training which is approximately 10 better than current state-of-the-art self-supervised learning methods. Similarly, on HMDB51 dataset we outperform self-supervised state-of-the art methods by 12.7 on action classification task.", "We present an unsupervised representation learning approach using videos without semantic labels. We leverage the temporal coherence as a supervisory signal by formulating representation learning as a sequence sorting task. We take temporally shuffled frames (i.e., in non-chronological order) as inputs and train a convolutional neural network to sort the shuffled sequences. Similar to comparison-based sorting algorithms, we propose to extract features from all frame pairs and aggregate them to predict the correct order. As sorting shuffled image sequence requires an understanding of the statistical temporal structure of images, training with such a proxy task allows us to learn rich and generalizable visual representation. We validate the effectiveness of the learned representation using our method as pre-training on high-level recognition problems. The experimental results show that our method compares favorably against state-of-the-art methods on action recognition, image classification and object detection tasks.", "", "This paper presents a novel yet intuitive approach to unsupervised feature learning. Inspired by the human visual system, we explore whether low-level motion-based grouping cues can be used to learn an effective visual representation. Specifically, we use unsupervised motion-based segmentation on videos to obtain segments, which we use as pseudo ground truth to train a convolutional network to segment objects from a single frame. Given the extensive evidence that motion plays a key role in the development of the human visual system, we hope that this straightforward approach to unsupervised learning will be more effective than cleverly designed pretext tasks studied in the literature. Indeed, our extensive experiments show that this is the case. When used for transfer learning on object detection, our representation significantly outperforms previous unsupervised approaches across multiple settings, especially when training data for the target task is scarce." ] }
1811.11325
2902915360
Across a majority of modern learning-based tracking systems, expensive annotations are needed to achieve state-of-the-art performance. In contrast, the Lucas-Kanade (LK) algorithm works well without any annotation. However, LK makes a strong photometric (brightness) consistency assumption on image intensity and is prone to drift because of large motion, occlusion, and the aperture problem. To relax this assumption and alleviate the drift problem, we propose CyLKs, a data-driven way of training Lucas-Kanade in an unsupervised manner. CyLKs learns a feature transformation through CNNs, transforming the input images into a feature space that is especially favorable to LK tracking. During training, we perform differentiable Lucas-Kanade forward and backward passes on the convolutional feature maps, and then minimize the re-projection error. During testing, we perform LK tracking on the learned features. We apply our model to the task of landmark tracking and perform experiments on the THUMOS and 300VW datasets.
Direct methods, mostly based on the Lucas-Kanade algorithm @cite_16 , operate on pixel intensities to estimate the motion between images. Such methods are computationally efficient and have proved to achieve competitive results in SLAM @cite_15 @cite_17 and visual odometry @cite_11 . However, direct methods assume photometric consistency across frames and are thus not robust to illumination changes, occlusion, and out-of-plane motion.
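To illustrate the direct, intensity-based estimation these methods perform, here is a hedged sketch of one Gauss-Newton update of a pure-translation Lucas-Kanade alignment. It assumes the shifted window stays inside the image and rounds the warp to integer pixels for brevity; real implementations interpolate sub-pixel warps and use richer warp models.

```python
import numpy as np

def lk_translation_step(template, image, p):
    """One Gauss-Newton update of a pure-translation Lucas-Kanade warp.

    Minimizes sum_x (I(x + p) - T(x))^2 under brightness constancy.
    p = (dy, dx) is the current translation; the warp is rounded to
    integer pixels and the shifted window must stay inside `image`.
    """
    h, w = template.shape
    dy, dx = int(round(p[0])), int(round(p[1]))
    window = image[dy:dy + h, dx:dx + w].astype(np.float64)
    gy, gx = np.gradient(window)                    # image gradients
    residual = template.astype(np.float64) - window
    J = np.stack([gy.ravel(), gx.ravel()], axis=1)  # Jacobian wrt (dy, dx)
    dp, *_ = np.linalg.lstsq(J, residual.ravel(), rcond=None)
    return np.asarray(p, dtype=float) + dp
```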
{ "cite_N": [ "@cite_15", "@cite_16", "@cite_11", "@cite_17" ], "mid": [ "612478963", "2035379092", "1970504153", "" ], "abstract": [ "We propose a direct (feature-less) monocular SLAM algorithm which, in contrast to current state-of-the-art regarding direct methods, allows to build large-scale, consistent maps of the environment. Along with highly accurate pose estimation based on direct image alignment, the 3D environment is reconstructed in real-time as pose-graph of keyframes with associated semi-dense depth maps. These are obtained by filtering over a large number of pixelwise small-baseline stereo comparisons. The explicitly scale-drift aware formulation allows the approach to operate on challenging sequences including large variations in scene scale. Major enablers are two key novelties: (1) a novel direct tracking method which operates on ( sim (3) ), thereby explicitly detecting scale-drift, and (2) an elegant probabilistic solution to include the effect of noisy depth values into tracking. The resulting direct monocular SLAM system runs in real-time on a CPU.", "Since the Lucas-Kanade algorithm was proposed in 1981 image alignment has become one of the most widely used techniques in computer vision. Applications range from optical flow and tracking to layered motion, mosaic construction, and face coding. Numerous algorithms have been proposed and a wide variety of extensions have been made to the original formulation. We present an overview of image alignment, describing most of the algorithms and their extensions in a consistent framework. We concentrate on the inverse compositional algorithm, an efficient algorithm that we recently proposed. We examine which of the extensions to Lucas-Kanade can be used with the inverse compositional algorithm without any significant loss of efficiency, and which cannot. In this paper, Part 1 in a series of papers, we cover the quantity approximated, the warp update rule, and the gradient descent approximation. In future papers, we will cover the choice of the error function, how to allow linear appearance variation, and how to impose priors on the parameters.", "We propose a semi-direct monocular visual odometry algorithm that is precise, robust, and faster than current state-of-the-art methods. The semi-direct approach eliminates the need of costly feature extraction and robust matching techniques for motion estimation. Our algorithm operates directly on pixel intensities, which results in subpixel precision at high frame-rates. A probabilistic mapping method that explicitly models outlier measurements is used to estimate 3D points, which results in fewer outliers and more reliable points. Precise and high frame-rate motion estimation brings increased robustness in scenes of little, repetitive, and high-frequency texture. The algorithm is applied to micro-aerial-vehicle state-estimation in GPS-denied environments and runs at 55 frames per second on the onboard embedded computer and at more than 300 frames per second on a consumer laptop. We call our approach SVO (Semi-direct Visual Odometry) and release our implementation as open-source software.", "" ] }
1811.11325
2902915360
Across a majority of modern learning-based tracking systems, expensive annotations are needed to achieve state-of-the-art performance. In contrast, the Lucas-Kanade (LK) algorithm works well without any annotation. However, LK makes a strong photometric (brightness) consistency assumption on image intensity and is prone to drift because of large motion, occlusion, and the aperture problem. To relax this assumption and alleviate the drift problem, we propose CyLKs, a data-driven way of training Lucas-Kanade in an unsupervised manner. CyLKs learns a feature transformation through CNNs, transforming the input images into a feature space that is especially favorable to LK tracking. During training, we perform differentiable Lucas-Kanade forward and backward passes on the convolutional feature maps, and then minimize the re-projection error. During testing, we perform LK tracking on the learned features. We apply our model to the task of landmark tracking and perform experiments on the THUMOS and 300VW datasets.
Instead of working on raw images, feature-based methods extract robust features and estimate motion by matching feature descriptors between images. Robust features such as SIFT @cite_6 and ORB @cite_18 are commonly used. Since they do not assume photometric consistency, feature-based methods are more robust to illumination changes. However, their performance relies heavily on the localization and matching accuracy of the features.
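A small sketch of this feature-based alternative using OpenCV's ORB detector and brute-force Hamming matching; the function name and the number of kept matches are assumptions, and motion would subsequently be estimated from the matched coordinates (e.g., with cv2.findHomography).

```python
import cv2

def orb_match(img1, img2, max_matches=50):
    """Feature-based matching with ORB descriptors and Hamming distance.

    img1, img2: grayscale images. Returns keypoints and the best matches;
    motion can then be estimated from the matched coordinates (e.g., via
    cv2.findHomography). max_matches is an illustrative cutoff.
    """
    orb = cv2.ORB_create()
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    return kp1, kp2, matches[:max_matches]
```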
{ "cite_N": [ "@cite_18", "@cite_6" ], "mid": [ "2117228865", "2151103935" ], "abstract": [ "Feature matching is at the base of many computer vision problems, such as object recognition or structure from motion. Current methods rely on costly descriptors for detection and matching. In this paper, we propose a very fast binary descriptor based on BRIEF, called ORB, which is rotation invariant and resistant to noise. We demonstrate through experiments how ORB is at two orders of magnitude faster than SIFT, while performing as well in many situations. The efficiency is tested on several real-world applications, including object detection and patch-tracking on a smart phone.", "This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance." ] }
1811.11325
2902915360
Across a majority of modern learning-based tracking systems, expensive annotations are needed to achieve state-of-the-art performance. In contrast, the Lucas-Kanade (LK) algorithm works well without any annotation. However, LK makes a strong photometric (brightness) consistency assumption on image intensity and is prone to drift because of large motion, occlusion, and the aperture problem. To relax this assumption and alleviate the drift problem, we propose CyLKs, a data-driven way of training Lucas-Kanade in an unsupervised manner. CyLKs learns a feature transformation through CNNs, transforming the input images into a feature space that is especially favorable to LK tracking. During training, we perform differentiable Lucas-Kanade forward and backward passes on the convolutional feature maps, and then minimize the re-projection error. During testing, we perform LK tracking on the learned features. We apply our model to the task of landmark tracking and perform experiments on the THUMOS and 300VW datasets.
With the superior representation capabilities of CNNs, many CNN-based tracking methods outperform unsupervised tracking methods. @cite_21 proposed GOTURN, which applies a deep regression network to predict object locations based on deep features. @cite_22 proposed a classification-based multi-domain tracker, which tries to separate domain-independent information from domain-specific information in order to capture shared representations. C-COT @cite_9 introduced multi-resolution fusion and continuous-domain learning into the visual tracking system to achieve accurate sub-pixel feature point tracking. ECO @cite_2 proposed a factorized convolution operator to reduce the number of parameters, together with an efficient model update strategy, achieving significant improvements in both speed and robustness. @cite_4 designed a two-stream CNN to handle drastic appearance changes and distinguish the target object from similar distractors during tracking. @cite_19 set up a CNN architecture for simultaneous detection and tracking, and introduced correlation features that represent object co-occurrences across time to aid tracking.
{ "cite_N": [ "@cite_4", "@cite_22", "@cite_9", "@cite_21", "@cite_19", "@cite_2" ], "mid": [ "2211629196", "2950410377", "2518013266", "2964253307", "2962855257", "2557641257" ], "abstract": [ "We propose a new approach for general object tracking with fully convolutional neural network. Instead of treating convolutional neural network (CNN) as a black-box feature extractor, we conduct in-depth study on the properties of CNN features offline pre-trained on massive image data and classification task on ImageNet. The discoveries motivate the design of our tracking system. It is found that convolutional layers in different levels characterize the target from different perspectives. A top layer encodes more semantic features and serves as a category detector, while a lower layer carries more discriminative information and can better separate the target from distracters with similar appearance. Both layers are jointly used with a switch mechanism during tracking. It is also found that for a tracking target, only a subset of neurons are relevant. A feature map selection method is developed to remove noisy and irrelevant feature maps, which can reduce computation redundancy and improve tracking accuracy. Extensive evaluation on the widely used tracking benchmark [36] shows that the proposed tacker outperforms the state-of-the-art significantly.", "We propose a novel visual tracking algorithm based on the representations from a discriminatively trained Convolutional Neural Network (CNN). Our algorithm pretrains a CNN using a large set of videos with tracking ground-truths to obtain a generic target representation. Our network is composed of shared layers and multiple branches of domain-specific layers, where domains correspond to individual training sequences and each branch is responsible for binary classification to identify the target in each domain. We train the network with respect to each domain iteratively to obtain generic target representations in the shared layers. When tracking a target in a new sequence, we construct a new network by combining the shared layers in the pretrained CNN with a new binary classification layer, which is updated online. Online tracking is performed by evaluating the candidate windows randomly sampled around the previous target state. The proposed algorithm illustrates outstanding performance compared with state-of-the-art methods in existing tracking benchmarks.", "Discriminative Correlation Filters (DCF) have demonstrated excellent performance for visual object tracking. The key to their success is the ability to efficiently exploit available negative data b ...", "Machine learning techniques are often used in computer vision due to their ability to leverage large amounts of training data to improve performance. Unfortunately, most generic object trackers are still trained from scratch online and do not benefit from the large number of videos that are readily available for offline training. We propose a method for offline training of neural networks that can track novel objects at test-time at 100 fps. Our tracker is significantly faster than previous methods that use neural networks for tracking, which are typically very slow to run and not practical for real-time applications. Our tracker uses a simple feed-forward network with no online training required. The tracker learns a generic relationship between object motion and appearance and can be used to track novel objects that do not appear in the training set. 
We test our network on a standard tracking benchmark to demonstrate our tracker’s state-of-the-art performance. Further, our performance improves as we add more videos to our offline training set. To the best of our knowledge, our tracker (Our tracker is available at http: davheld.github.io GOTURN GOTURN.html) is the first neural-network tracker that learns to track generic objects at 100 fps.", "Recent approaches for high accuracy detection and tracking of object categories in video consist of complex multistage solutions that become more cumbersome each year. In this paper we propose a ConvNet architecture that jointly performs detection and tracking, solving the task in a simple and effective way. Our contributions are threefold: (i) we set up a ConvNet architecture for simultaneous detection and tracking, using a multi-task objective for frame-based object detection and across-frame track regression; (ii) we introduce correlation features that represent object co-occurrences across time to aid the ConvNet during tracking; and (iii) we link the frame level detections based on our across-frame tracklets to produce high accuracy detections at the video level. Our ConvNet architecture for spatiotemporal object detection is evaluated on the large-scale ImageNet VID dataset where it achieves state-of-the-art results. Our approach provides better single model performance than the winning method of the last ImageNet challenge while being conceptually much simpler. Finally, we show that by increasing the temporal stride we can dramatically increase the tracker speed.", "In recent years, Discriminative Correlation Filter (DCF) based methods have significantly advanced the state-of-the-art in tracking. However, in the pursuit of ever increasing tracking performance, their characteristic speed and real-time capability have gradually faded. Further, the increasingly complex models, with massive number of trainable parameters, have introduced the risk of severe over-fitting. In this work, we tackle the key causes behind the problems of computational complexity and over-fitting, with the aim of simultaneously improving both speed and performance. We revisit the core DCF formulation and introduce: (i) a factorized convolution operator, which drastically reduces the number of parameters in the model, (ii) a compact generative model of the training sample distribution, that significantly reduces memory and time complexity, while providing better diversity of samples, (iii) a conservative model update strategy with improved robustness and reduced complexity. We perform comprehensive experiments on four benchmarks: VOT2016, UAV123, OTB-2015, and TempleColor. When using expensive deep features, our tracker provides a 20-fold speedup and achieves a 13.0 relative gain in Expected Average Overlap compared to the top ranked method [12] in the VOT2016 challenge. Moreover, our fast variant, using hand-crafted features, operates at 60 Hz on a single CPU, while obtaining 65.0 AUC on OTB-2015." ] }
1811.11141
2903527042
Distributed synchronous stochastic gradient descent has been widely used to train deep neural networks on computer clusters. As computational power increases, network communications have become a limiting factor in system scalability. In this paper, we observe that many deep neural networks have a large number of layers with only a small amount of data to be communicated. Based on the fact that merging some short communication tasks into a single one may reduce the overall communication time, we formulate an optimization problem to minimize the training iteration time. We develop an optimal solution named merged-gradient WFBP (MG-WFBP) and implement it in our open-source deep learning platform B-Caffe. Our experimental results on an 8-node GPU cluster with 10GbE interconnect and trace-based simulation results on a 64-node cluster both show that the MG-WFBP algorithm can achieve much better scaling efficiency than existing methods WFBP and SyncEASGD.
In the HPC community, MPI data communication collectives have been redesigned for distributed training to improve communication performance across multiple machines @cite_15 . Many MPI-like implementations, such as OpenMPI (https://www.open-mpi.org), NCCL2 (https://developer.nvidia.com/nccl), Gloo (https://github.com/facebookincubator/gloo) and MVAPICH2-GDR (https://mvapich.cse.ohio-state.edu), support efficient CUDA-aware communication between GPUs over the network, and many state-of-the-art deep learning frameworks (e.g., TensorFlow, Caffe2 and CNTK) integrate NCCL2 or Gloo in their distributed training modules. Even though these libraries provide very efficient communication collectives, data communication still becomes the bottleneck when the communication-to-computation ratio is high, and S-SGD does not scale well.
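To make the merging argument concrete, here is a minimal sketch of the trade-off, assuming the common latency-bandwidth cost model t(m) = alpha + beta * m for an m-byte message; the alpha/beta constants and layer sizes below are illustrative assumptions, not measurements from the paper:

```python
# A minimal sketch of why merging small gradient messages helps, using the
# common latency-bandwidth cost model t(m) = alpha + beta * m. The layer
# sizes and alpha/beta values below are illustrative, not measured.

ALPHA = 50e-6        # per-message startup latency in seconds (assumed)
BETA = 1.0 / 1.25e9  # per-byte transfer time for ~10GbE, in s/byte (assumed)

def comm_time(message_sizes):
    """Total time when each gradient tensor is sent as its own message."""
    return sum(ALPHA + BETA * m for m in message_sizes)

def merged_comm_time(message_sizes):
    """Time when all tensors are packed into a single buffer first."""
    return ALPHA + BETA * sum(message_sizes)

# Many layers with small gradients (e.g., biases, batch-norm parameters).
layer_bytes = [4 * n for n in (256, 256, 512, 512, 1024, 2048, 4096)]

print(f"separate: {comm_time(layer_bytes) * 1e6:.1f} us")
print(f"merged:   {merged_comm_time(layer_bytes) * 1e6:.1f} us")
```

With many small tensors, the merged transfer pays the startup latency alpha only once instead of once per layer, which is the saving that MG-WFBP formalizes and schedules against the backpropagation overlap.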
{ "cite_N": [ "@cite_15" ], "mid": [ "2580688187" ], "abstract": [ "Availability of large data sets like ImageNet and massively parallel computation support in modern HPC devices like NVIDIA GPUs have fueled a renewed interest in Deep Learning (DL) algorithms. This has triggered the development of DL frameworks like Caffe, Torch, TensorFlow, and CNTK. However, most DL frameworks have been limited to a single node. In order to scale out DL frameworks and bring HPC capabilities to the DL arena, we propose, S-Caffe; a scalable and distributed Caffe adaptation for modern multi-GPU clusters. With an in-depth analysis of new requirements brought forward by the DL frameworks and limitations of current communication runtimes, we present a co-design of the Caffe framework and the MVAPICH2-GDR MPI runtime. Using the co-design methodology, we modify Caffe's workflow to maximize the overlap of computation and communication with multi-stage data propagation and gradient aggregation schemes. We bring DL-Awareness to the MPI runtime by proposing a hierarchical reduction design that benefits from CUDA-Aware features and provides up to a massive 133x speedup over OpenMPI and 2.6x speedup over MVAPICH2 for 160 GPUs. S-Caffe successfully scales up to 160 K-80 GPUs for GoogLeNet (ImageNet) with a speedup of 2.5x over 32 GPUs. To the best of our knowledge, this is the first framework that scales up to 160 GPUs. Furthermore, even for single node training, S-Caffe shows an improvement of 14 and 9 over Nvidia's optimized Caffe for 8 and 16 GPUs, respectively. In addition, S-Caffe achieves up to 1395 samples per second for the AlexNet model, which is comparable to the performance of Microsoft CNTK." ] }
1811.11262
2952314908
The paper presents a topology-agnostic greedy protocol for network-on-chip routing. The proposed routing algorithm can tolerate any number of permanent faults, and is proven to be deadlock-free. We introduce a specialized variant of the algorithm, which is optimized for 2D mesh networks, both flat and wireless. The adaptiveness and minimality of several variants of this algorithm are analyzed through graph-based simulations.
An overview of fault-tolerant routing techniques in the context of network-on-chip is provided by @cite_19 . In the context of our work, we give a short overview of the main techniques that offer online fault tolerance, i.e., techniques able to handle failures that occur after a chip has left the factory. We present these techniques in order of increasing complexity.
{ "cite_N": [ "@cite_19" ], "mid": [ "2057785859" ], "abstract": [ "Networks-on-Chip constitute the interconnection architecture of future, massively parallel multiprocessors that assemble hundreds to thousands of processing cores on a single chip. Their integration is enabled by ongoing miniaturization of chip manufacturing technologies following Moore's Law. It comes with the downside of the circuit elements' increased susceptibility to failure. Research on fault-tolerant Networks-on-Chip tries to mitigate partial failure and its effect on network performance and reliability by exploiting various forms of redundancy at the suitable network layers. The article at hand reviews the failure mechanisms, fault models, diagnosis techniques, and fault-tolerance methods in on-chip networks, and surveys and summarizes the research of the last ten years. It is structured along three communication layers: the data link, the network, and the transport layers. The most important results are summarized and open research problems and challenges are highlighted to guide future research on this topic." ] }
1811.11262
2952314908
The paper presents a topology-agnostic greedy protocol for network-on-chip routing. The proposed routing algorithm can tolerate any number of permanent faults, and is proven to be deadlock-free. We introduce a specialized variant of the algorithm, which is optimized for 2D mesh networks, both flat and wireless. The adaptiveness and minimality of several variants of this algorithm are analyzed through graph-based simulations.
A first category of algorithms employs face-routing-based techniques to find paths around areas with faulty components. Maze routing is an important example of these techniques @cite_10 . An important advantage of such methods is that up-to-date information on the status of each link or node is required only by its neighbouring nodes, resulting in a low reconfiguration overhead. Furthermore, these methods provide full fault coverage: a route will be found whenever a path from a source node to its destination exists. The main disadvantage is that these methods are limited to planar graph topologies and cannot easily be extended to other topologies, such as 3D NoCs, wireless NoCs or a torus topology. Additionally, when a packet encounters an area with faulty components, the lack of information on the shape of this area may cause the message to be routed along the border in a suboptimal direction, significantly increasing the path length.
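As a rough illustration of this local, greedy-with-detours flavor of routing (a simplified sketch, not the Maze-routing algorithm itself, which adds face-traversal rules to guarantee delivery), consider the following Python fragment on a 2D mesh with faulty links; the topology and fault set are arbitrary examples:

```python
# A much-simplified illustration of local greedy routing with detours on a
# 2D mesh with faulty links. Each hop uses only local link-status knowledge.

def route(src, dst, faulty_links, width, height, max_hops=200):
    """Route greedily by Manhattan distance; sidestep faulty links locally."""
    def neighbors(n):
        x, y = n
        cand = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
        return [c for c in cand
                if 0 <= c[0] < width and 0 <= c[1] < height
                and frozenset((n, c)) not in faulty_links]

    def dist(a, b):
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    path, prev, cur = [src], None, src
    for _ in range(max_hops):
        if cur == dst:
            return path
        # Prefer the live link that brings us closest to the destination,
        # deprioritizing an immediate bounce back to the previous node.
        options = sorted(neighbors(cur), key=lambda c: (dist(c, dst), c == prev))
        if not options:
            return None  # dead end: a real protocol would backtrack here
        prev, cur = cur, options[0]
        path.append(cur)
    return None  # possibly circling a fault region without shape information

faults = {frozenset({(1, 0), (1, 1)}), frozenset({(2, 0), (2, 1)})}
print(route((0, 0), (3, 3), faults, width=4, height=4))
```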
{ "cite_N": [ "@cite_10" ], "mid": [ "1989098385" ], "abstract": [ "This paper introduces a new, practical routing algorithm, Maze-routing, to tolerate faults in network-on-chips. The algorithm is the first to provide all of the following properties at the same time: 1) fully-distributed with no centralized component, 2) guaranteed delivery (it guarantees to deliver packets when a path exists between nodes, or otherwise indicate that destination is unreachable, while being deadlock and livelock free), 3) low area cost, 4) low reconfiguration overhead upon a fault. To achieve all these properties, we propose Maze-routing, a new variant of face routing in on-chip networks and make use of deflections in routing. Our evaluations show that Maze-routing has 16X less area overhead than other algorithms that provide guaranteed delivery. Our Maze-routing algorithm is also high performance: for example, when up to 5 links are broken, it provides 50 higher saturation throughput compared to the state-of-the-art." ] }
1811.11262
2952314908
The paper presents a topology-agnostic greedy protocol for network-on-chip routing. The proposed routing algorithm can tolerate any number of permanent faults, and is proven to be deadlock-free. We introduce a specialized variant of the algorithm, which is optimized for 2D mesh networks, both flat and wireless. The adaptiveness and minimality of several variants of this algorithm are analyzed through graph-based simulations.
Another important technique uses fault regions @cite_16 . This method applies to n-dimensional mesh networks, in which one or multiple faults are grouped into one or more rectangular, non-overlapping regions. When encountering such a region, the protocol routes along its border. The main issue with this technique is that the rectangular fault regions typically contain some healthy nodes as well. Healthy nodes inside a fault region cannot receive messages and thus must be turned off, which results in poor resource utilization.
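The resource-utilization cost is easy to see in a small sketch: given a set of faulty nodes (arbitrary example coordinates), every healthy node trapped inside the enclosing rectangle must also be deactivated:

```python
# A small sketch of the rectangular-fault-region idea: faults are enclosed
# in a bounding rectangle, and any healthy node that falls inside the
# rectangle is sacrificed. The fault coordinates are arbitrary examples.

def fault_region(faulty_nodes):
    """Smallest axis-aligned rectangle enclosing all faulty nodes."""
    xs = [x for x, _ in faulty_nodes]
    ys = [y for _, y in faulty_nodes]
    return (min(xs), min(ys)), (max(xs), max(ys))

def deactivated_healthy(faulty_nodes):
    """Healthy nodes turned off just to keep the region rectangular."""
    (x0, y0), (x1, y1) = fault_region(faulty_nodes)
    region = {(x, y) for x in range(x0, x1 + 1) for y in range(y0, y1 + 1)}
    return region - set(faulty_nodes)

faults = [(2, 2), (4, 5), (3, 3)]
print(deactivated_healthy(faults))  # healthy nodes that must be turned off
```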
{ "cite_N": [ "@cite_16" ], "mid": [ "2127879369" ], "abstract": [ "Constructing 2D mesh topology network on chips (NoCs) without using virtual channels becomes attractive approach to building future massive multi-core computer systems because of its large amount of bandwidths, less design complexity, and less space consumption of routers. Dead lock problem on NoC is critical because it makes data transmission between nodes unreachable, and inevitable failures in hardware make mesh topology irregular. Although several fault-tolerant techniques are available, deadlock-free routing control algorithm for irregular mesh topology is promising approach to utilize large amount of bandwidths of NoC. The main drawback of available routing control algorithms is that many healthy nodes are deactivated to guarantee deadlock-freeness, and a number of deactivated nodes lead to traffic congestion. In this paper, we propose new fault-tolerant routing algorithm on 2D mesh topology NoC constructed without using virtual channels. The proposed algorithm is fully analyzed its dead lock-freeness, and the experimental result shows that the proposed algorithm can achieve both less number of deactivated nodes and higher throughput." ] }
1811.11262
2952314908
The paper presents a topology-agnostic greedy protocol for network-on-chip routing. The proposed routing algorithm can tolerate any number of permanent faults, and is proven to be deadlock-free. We introduce a specialized variant of the algorithm, which is optimized for 2D mesh networks, both flat and wireless. The adaptiveness and minimality of several variants of this algorithm are analyzed through graph-based simulations.
A subcategory of routing schemes in which the full graph topology is processed employs spanning trees to ensure that the routing process remains deadlock-free. An overview of these methods is given by @cite_18 . The fundamental algorithm in this category is up*/down* routing @cite_9 . In this method, a spanning tree is constructed over the network, and an 'up' or 'down' direction is assigned to each arc in the network (including the arcs that are not part of the spanning tree). The directions are assigned in such a way that there is no cycle consisting exclusively of 'up' arcs or exclusively of 'down' arcs. To guarantee deadlock freedom, shortest paths between a source and destination vertex are searched under the restriction that no 'up' arc can follow a 'down' arc (i.e., the 'down-up' turn is forbidden). These path constraints were relaxed in @cite_2 , which uses a more fine-grained categorization of edges in which only a few specific turns are prohibited.
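A compact sketch of the up*/down* idea, assuming a BFS spanning tree defines the node ordering and using a small example graph: an arc pointing to a lower-ordered node counts as 'up', and the path search simply refuses the forbidden 'down-up' turn:

```python
# Sketch of up*/down* routing: nodes are ordered by a BFS spanning tree from
# a root; an arc is 'up' if it points to a lower-ordered node. Every legal
# path is a (possibly empty) run of up arcs followed by down arcs, which is
# what makes the scheme deadlock-free. The graph below is an arbitrary example.
from collections import deque

def bfs_order(adj, root):
    order, seen, q = {root: 0}, {root}, deque([root])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                order[v] = len(order)
                q.append(v)
    return order

def updown_path(adj, order, src, dst):
    """Shortest path that never takes an 'up' arc after a 'down' arc."""
    start = (src, 'up')                 # no down arc taken yet
    q, parent = deque([start]), {start: None}
    while q:
        node, phase = q.popleft()
        if node == dst:
            path, state = [], (node, phase)
            while state:
                path.append(state[0])
                state = parent[state]
            return path[::-1]
        for v in adj[node]:
            is_up = order[v] < order[node]
            if is_up and phase == 'down':
                continue                # forbidden 'down-up' turn
            nxt = (v, 'up' if is_up else 'down')
            if nxt not in parent:
                parent[nxt] = (node, phase)
                q.append(nxt)
    return None

adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
order = bfs_order(adj, root=0)
print(updown_path(adj, order, 4, 2))    # e.g., [4, 3, 2]
```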
{ "cite_N": [ "@cite_9", "@cite_18", "@cite_2" ], "mid": [ "2124906592", "2025879446", "2142007322" ], "abstract": [ "Autonet is a self-configuring local area network composed of switches interconnected by 100 Mb s, full-duplex, point-to-point links. The switches contain 12 ports that are internally connected by a full crossbar. Switches use cut-through to achieve a packet forwarding latency as low as 2 ms switch. Any switch port can be cabled to any other switch port or to a host network controller. A processor in each switch monitors the network's physical configuration. A distributed algorithm running on the switch processor computes the routes packets are to follow and fills in the packet forwarding table in each switch. With Autonet, distinct paths through the set of network links can carry packets in parallel, allowing many pairs of hosts to communicate simultaneously at full link bandwidth. A 30-switch network with more than 100 hosts has been the service network for Digital's Systems Research Center since February 1990. >", "Most standard cluster interconnect technologies are flexible with respect to network topology. This has spawned a substantial amount of research on topology-agnostic routing algorithms, which make no assumption about the network structure, thus providing the flexibility needed to route on irregular networks. Actually, such an irregularity should be often interpreted as minor modifications of some regular interconnection pattern, such as those induced by faults. In fact, topology-agnostic routing algorithms are also becoming increasingly useful for networks on chip (NoCs), where faults may make the preferred 2D mesh topology irregular. Existing topology-agnostic routing algorithms were developed for varying purposes, giving them different and not always comparable properties. Details are scattered among many papers, each with distinct conditions, making comparison difficult. This paper presents a comprehensive overview of the known topology-agnostic routing algorithms. We classify these algorithms by their most important properties, and evaluate them consistently. This provides significant insight into the algorithms and their appropriateness for different on- and off-chip environments.", "System area networks (SANs), which usually accept arbitrary topologies, have been used to connect hosts in PC clusters. Although deadlock-free routing is often employed for low-latency communications using wormhole or virtual cut-through switching, the interconnection adaptivity introduces difficulties in establishing deadlock-free paths. An up* down* routing algorithm, which has been widely used to avoid deadlocks in irregular networks, tends to make unbalanced paths as it employs a one-dimensional directed graph. The current study introduces a two-dimensional directed graph on which adaptive routings called left-up first turn (L-turn) routings and right-down last turn (R-turn) routings are proposed to make the paths as uniformly distributed as possible. This scheme guarantees deadlock-freedom because it uses the turn model approach, and the extra degree of freedom in the two-dimensional graph helps to ensure that the prohibited turns are well-distributed. Simulation results show that better throughput and latency results from uniformly distributing the prohibited turns by which the traffic would be more distributed toward the leaf nodes. 
The L-turn routings, which meet this condition, improve throughput by up to 100 percent compared with two up* down*-based routings, and also reduce latency" ] }
1811.11205
2903120399
The concept of conditional computation for deep nets has been proposed previously to improve model performance by selectively using only parts of the model conditioned on the sample it is processing. In this paper, we investigate input-dependent dynamic filter selection in deep convolutional neural networks (CNNs). The problem is interesting because the idea of forcing different parts of the model to learn from different types of samples may help us acquire better filters in CNNs, improve the model generalization performance and potentially increase the interpretability of model behavior. We propose a novel yet simple framework called GaterNet, which involves a backbone and a gater network. The backbone network is a regular CNN that performs the major computation needed for making a prediction, while a global gater network is introduced to generate binary gates for selectively activating filters in the backbone network based on each input. Extensive experiments on CIFAR and ImageNet datasets show that our models consistently outperform the original models with a large margin. On CIFAR-10, our model also improves upon state-of-the-art results.
The concept of conditional computation was first discussed by Bengio in @cite_4 . Early works on conditional computation focus on how to select model components on the fly. The authors of @cite_14 studied four approaches for learning stochastic neurons in fully-connected neural networks for conditional selection. On the other hand, Davis and Arel used low-rank approximations to predict the sparse activations of neurons at each layer @cite_13 . Reinforcement learning has also been tested as a way to optimize conditional computation policies @cite_12 .
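As an illustration, the straight-through estimator from @cite_14 can be sketched in a few lines of PyTorch: the forward pass emits hard binary gates while the backward pass copies the gradient through the underlying sigmoid (a minimal sketch, not the paper's full experimental setup):

```python
# Minimal straight-through estimator: a binary stochastic gate whose forward
# value is hard 0/1, while gradients flow as if the output were the sigmoid.
import torch

def straight_through_gate(logits):
    probs = torch.sigmoid(logits)
    hard = torch.bernoulli(probs)       # stochastic 0/1 gates
    # Forward value is `hard`; backward treats the output as `probs`.
    return hard + probs - probs.detach()

logits = torch.randn(8, requires_grad=True)
gates = straight_through_gate(logits)
loss = (gates * torch.randn(8)).sum()
loss.backward()
print(logits.grad)                      # nonzero despite the hard gates
```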
{ "cite_N": [ "@cite_13", "@cite_14", "@cite_4", "@cite_12" ], "mid": [ "1549825062", "2242818861", "2951163624", "2179423374" ], "abstract": [ "Scalability properties of deep neural networks raise key research questions, particularly as the problems considered become larger and more challenging. This paper expands on the idea of conditional computation introduced by Bengio, et. al., where the nodes of a deep network are augmented by a set of gating units that determine when a node should be calculated. By factorizing the weight matrix into a low-rank approximation, an estimation of the sign of the pre-nonlinearity activation can be efficiently obtained. For networks using rectified-linear hidden units, this implies that the computation of a hidden unit with an estimated negative pre-nonlinearity can be ommitted altogether, as its value will become zero when nonlinearity is applied. For sparse neural networks, this can result in considerable speed gains. Experimental results using the MNIST and SVHN data sets with a fully-connected deep neural network demonstrate the performance robustness of the proposed scheme with respect to the error introduced by the conditional computation process.", "Stochastic neurons and hard non-linearities can be useful for a number of reasons in deep learning models, but in many cases they pose a challenging problem: how to estimate the gradient of a loss function with respect to the input of such stochastic or non-smooth neurons? I.e., can we \"back-propagate\" through these stochastic neurons? We examine this question, existing approaches, and compare four families of solutions, applicable in different settings. One of them is the minimum variance unbiased gradient estimator for stochatic binary neurons (a special case of the REINFORCE algorithm). A second approach, introduced here, decomposes the operation of a binary stochastic neuron into a stochastic binary part and a smooth differentiable part, which approximates the expected effect of the pure stochatic binary neuron to first order. A third approach involves the injection of additive or multiplicative noise in a computational graph that is otherwise differentiable. A fourth approach heuristically copies the gradient with respect to the stochastic output directly as an estimator of the gradient with respect to the sigmoid argument (we call this the straight-through estimator). To explore a context where these estimators are useful, we consider a small-scale version of conditional computation , where sparse stochastic units form a distributed representation of gaters that can turn off in combinatorially many ways large chunks of the computation performed in the rest of the neural network. In this case, it is important that the gating units produce an actual 0 most of the time. The resulting sparsity can be potentially be exploited to greatly reduce the computational cost of large deep networks for which conditional computation would be useful.", "Deep learning research aims at discovering learning algorithms that discover multiple levels of distributed representations, with higher levels representing more abstract concepts. Although the study of deep learning has already led to impressive theoretical results, learning algorithms and breakthrough experiments, several challenges lie ahead. 
This paper proposes to examine some of these challenges, centering on the questions of scaling deep learning algorithms to much larger models and datasets, reducing optimization difficulties due to ill-conditioning or local minima, designing more efficient and powerful inference and sampling procedures, and learning to disentangle the factors of variation underlying the observed data. It also proposes a few forward-looking research directions aimed at overcoming these challenges.", "Deep learning has become the state-of-the-art tool in many applications, but the evaluation and training of deep models can be time-consuming and computationally expensive. The conditional computation approach has been proposed to tackle this problem (Bengio et al., 2013; Davis & Arel, 2013). It operates by selectively activating only parts of the network at a time. In this paper, we use reinforcement learning as a tool to optimize conditional computation policies. More specifically, we cast the problem of learning activation-dependent policies for dropping out blocks of units as a reinforcement learning problem. We propose a learning scheme motivated by computation speed, capturing the idea of wanting to have parsimonious activations while maintaining prediction accuracy. We apply a policy gradient algorithm for learning policies that optimize this loss function and propose a regularization mechanism that encourages diversification of the dropout policy. We present encouraging empirical results showing that this approach improves the speed of computation without impacting the quality of the approximation." ] }
1811.11205
2903120399
The concept of conditional computation for deep nets has been proposed previously to improve model performance by selectively using only parts of the model conditioned on the sample it is processing. In this paper, we investigate input-dependent dynamic filter selection in deep convolutional neural networks (CNNs). The problem is interesting because the idea of forcing different parts of the model to learn from different types of samples may help us acquire better filters in CNNs, improve the model generalization performance and potentially increase the interpretability of model behavior. We propose a novel yet simple framework called GaterNet, which involves a backbone and a gater network. The backbone network is a regular CNN that performs the major computation needed for making a prediction, while a global gater network is introduced to generate binary gates for selectively activating filters in the backbone network based on each input. Extensive experiments on CIFAR and ImageNet datasets show that our models consistently outperform the original models with a large margin. On CIFAR-10, our model also improves upon state-of-the-art results.
More recently, the authors of @cite_19 investigated the combination of conditional computation with Mixture of Experts on language modeling and machine translation tasks. At each time step in the sequence model, they dynamically select a small subset of experts to process the input. Their models significantly outperformed state-of-the-art models at a low computation cost. In the same vein, the authors of @cite_21 proposed HydraNets, which use multiple branches of networks for extracting features. In this work, a gating module is introduced to decide which branches to select for each specific input. This method requires a pre-processing step of clustering the ground-truth classes to force each branch to learn features for a specific cluster of classes, as discussed in the introduction.
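A hedged sketch of the sparsely-gated routing idea of @cite_19, with illustrative sizes and plain linear experts (the actual model places thousands of experts between LSTM layers and adds load-balancing losses):

```python
# Sparse mixture-of-experts routing in miniature: a gating network scores
# all experts, only the top-k are evaluated per input, and their outputs are
# combined with renormalized gate weights. Sizes are illustrative.
import torch
import torch.nn as nn

class SparseMoE(nn.Module):
    def __init__(self, dim=16, num_experts=8, k=2):
        super().__init__()
        self.experts = nn.ModuleList([nn.Linear(dim, dim) for _ in range(num_experts)])
        self.gate = nn.Linear(dim, num_experts)
        self.k = k

    def forward(self, x):                         # x: (batch, dim)
        scores = self.gate(x)                     # score every expert
        topv, topi = scores.topk(self.k, dim=-1)  # keep only the top-k per input
        weights = torch.softmax(topv, dim=-1)     # renormalize over selected experts
        out = torch.zeros_like(x)
        for b in range(x.size(0)):                # evaluate only the chosen experts
            for rank in range(self.k):
                e = topi[b, rank].item()
                out[b] += weights[b, rank] * self.experts[e](x[b])
        return out

moe = SparseMoE()
print(moe(torch.randn(4, 16)).shape)              # torch.Size([4, 16])
```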
{ "cite_N": [ "@cite_19", "@cite_21" ], "mid": [ "2581624817", "2798722023" ], "abstract": [ "The capacity of a neural network to absorb information is limited by its number of parameters. Conditional computation, where parts of the network are active on a per-example basis, has been proposed in theory as a way of dramatically increasing model capacity without a proportional increase in computation. In practice, however, there are significant algorithmic and performance challenges. In this work, we address these challenges and finally realize the promise of conditional computation, achieving greater than 1000x improvements in model capacity with only minor losses in computational efficiency on modern GPU clusters. We introduce a Sparsely-Gated Mixture-of-Experts layer (MoE), consisting of up to thousands of feed-forward sub-networks. A trainable gating network determines a sparse combination of these experts to use for each example. We apply the MoE to the tasks of language modeling and machine translation, where model capacity is critical for absorbing the vast quantities of knowledge available in the training corpora. We present model architectures in which a MoE with up to 137 billion parameters is applied convolutionally between stacked LSTM layers. On large language modeling and machine translation benchmarks, these models achieve significantly better results than state-of-the-art at lower computational cost.", "There is growing interest in improving the design of deep network architectures to be both accurate and low cost. This paper explores semantic specialization as a mechanism for improving the computational efficiency (accuracy-per-unit-cost) of inference in the context of image classification. Specifically, we propose a network architecture template called HydraNet, which enables state-of-the-art architectures for image classification to be transformed into dynamic architectures which exploit conditional execution for efficient inference. HydraNets are wide networks containing distinct components specialized to compute features for visually similar classes, but they retain efficiency by dynamically selecting only a small number of components to evaluate for any one input image. This design is made possible by a soft gating mechanism that encourages component specialization during training and accurately performs component selection during inference. We evaluate the HydraNet approach on both the CIFAR-100 and ImageNet classification tasks. On CIFAR, applying the HydraNet template to the ResNet and DenseNet family of models reduces inference cost by 2-4A— while retaining the accuracy of the baseline architectures. On ImageNet, applying the HydraNet template improves accuracy up to 2.5 when compared to an efficient baseline architecture with similar inference cost." ] }
1811.11205
2903120399
The concept of conditional computation for deep nets has been proposed previously to improve model performance by selectively using only parts of the model conditioned on the sample it is processing. In this paper, we investigate input-dependent dynamic filter selection in deep convolutional neural networks (CNNs). The problem is interesting because the idea of forcing different parts of the model to learn from different types of samples may help us acquire better filters in CNNs, improve the model generalization performance and potentially increase the interpretability of model behavior. We propose a novel yet simple framework called GaterNet, which involves a backbone and a gater network. The backbone network is a regular CNN that performs the major computation needed for making a prediction, while a global gater network is introduced to generate binary gates for selectively activating filters in the backbone network based on each input. Extensive experiments on CIFAR and ImageNet datasets show that our models consistently outperform the original models with a large margin. On CIFAR-10, our model also improves upon state-of-the-art results.
Dynamic network configuration is another type of conditional computation that has been studied previously. In this line of work, no parallel experts are explicitly defined. Instead, a single network is dynamically configured by selectively activating model components such as units and layers for each input. Adaptive Dropout was proposed by Ba and Frey to dynamically learn a dropout rate for each unit and each input @cite_2 . Denoyer and Gallinari proposed a tree-structured neural network called the Deep Sequential Neural Network @cite_8 . A path from the root to a leaf node in the tree represents a computation sequence that is dynamically determined for each input. Recently, Veit and Belongie @cite_5 proposed to skip layers in ResNet @cite_7 in an input-dependent manner. The resulting model performs better and is more robust to adversarial attacks than the original ResNet, while also reducing computation cost.
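The layer-skipping idea of @cite_5 can be sketched as a residual block with a per-input gate; a hard threshold is used here for clarity, whereas the paper trains a differentiable relaxation, and a real implementation would avoid evaluating the branch for gated-off examples:

```python
# Input-dependent layer skipping in miniature: a tiny gate inspects the
# input and decides, per example, whether the residual branch contributes.
import torch
import torch.nn as nn

class GatedResBlock(nn.Module):
    def __init__(self, dim=16):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.gate = nn.Linear(dim, 1)

    def forward(self, x):                                    # x: (batch, dim)
        keep = (torch.sigmoid(self.gate(x)) > 0.5).float()   # (batch, 1) hard gate
        # Note: the branch is still computed here for all rows; a real
        # implementation would skip the computation where keep == 0.
        return x + keep * self.body(x)

block = GatedResBlock()
print(block(torch.randn(4, 16)).shape)                       # torch.Size([4, 16])
```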
{ "cite_N": [ "@cite_8", "@cite_5", "@cite_7", "@cite_2" ], "mid": [ "2165141262", "2884751099", "2949650786", "2136836265" ], "abstract": [ "Neural Networks sequentially build high-level features through their successive layers. We propose here a new neural network model where each layer is associated with a set of candidate mappings. When an input is processed, at each layer, one mapping among these candidates is selected according to a sequential decision process. The resulting model is structured according to a DAG like architecture, so that a path from the root to a leaf node defines a sequence of transformations. Instead of considering global transformations, like in classical multilayer networks, this model allows us for learning a set of local transformations. It is thus able to process data with different characteristics through specific sequences of such local transformations, increasing the expression power of this model w.r.t a classical multilayered network. The learning algorithm is inspired from policy gradient techniques coming from the reinforcement learning domain and is used here instead of the classical back-propagation based gradient descent techniques. Experiments on different datasets show the relevance of this approach.", "Do convolutional networks really need a fixed feed-forward structure? What if, after identifying the high-level concept of an image, a network could move directly to a layer that can distinguish fine-grained differences? Currently, a network would first need to execute sometimes hundreds of intermediate layers that specialize in unrelated aspects. Ideally, the more a network already knows about an image, the better it should be at deciding which layer to compute next. In this work, we propose convolutional networks with adaptive inference graphs (ConvNet-AIG) that adaptively define their network topology conditioned on the input image. Following a high-level structure similar to residual networks (ResNets), ConvNet-AIG decides for each input image on the fly which layers are needed. In experiments on ImageNet we show that ConvNet-AIG learns distinct inference graphs for different categories. Both ConvNet-AIG with 50 and 101 layers outperform their ResNet counterpart, while using (20 ) and (33 ) less computations respectively. By grouping parameters into layers for related classes and only executing relevant layers, ConvNet-AIG improves both efficiency and overall classification quality. Lastly, we also study the effect of adaptive inference graphs on the susceptibility towards adversarial examples. We observe that ConvNet-AIG shows a higher robustness than ResNets, complementing other known defense mechanisms.", "Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57 error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. 
We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.", "Recently, it was shown that deep neural networks can perform very well if the activities of hidden units are regularized during learning, e.g., by randomly dropping out 50% of their activities. We describe a method called 'standout' in which a binary belief network is overlaid on a neural network and is used to regularize its hidden units by selectively setting activities to zero. This 'adaptive dropout network' can be trained jointly with the neural network by approximately computing local expectations of binary dropout variables, computing derivatives using back-propagation, and using stochastic gradient descent. Interestingly, experiments show that the learnt dropout network parameters recapitulate the neural network parameters, suggesting that a good dropout network regularizes activities according to magnitude. When evaluated on the MNIST and NORB datasets, we found that our method achieves lower classification error rates than other feature learning methods, including standard dropout, denoising auto-encoders, and restricted Boltzmann machines. For example, our method achieves 0.80% and 5.8% errors on the MNIST and NORB test sets, which is better than state-of-the-art results obtained using feature learning methods, including those that use convolutional architectures." ] }
1811.10999
2950353181
Aspect-level sentiment classification (ASC) aims at identifying sentiment polarities towards aspects in a sentence, where the aspect can behave as a general Aspect Category (AC) or a specific Aspect Term (AT). However, due to the especially expensive and labor-intensive labeling, existing public corpora in AT-level are all relatively small. Meanwhile, most of the previous methods rely on complicated structures with given scarce data, which largely limits the efficacy of the neural models. In this paper, we exploit a new direction named coarse-to-fine task transfer, which aims to leverage knowledge learned from a rich-resource source domain of the coarse-grained AC task, which is more easily accessible, to improve the learning in a low-resource target domain of the fine-grained AT task. To resolve both the aspect granularity inconsistency and feature mismatch between domains, we propose a Multi-Granularity Alignment Network (MGAN). In MGAN, a novel Coarse2Fine attention guided by an auxiliary task can help the AC task modeling at the same fine-grained level with the AT task. To alleviate the feature false alignment, a contrastive feature alignment method is adopted to align aspect-specific feature representations semantically. In addition, a large-scale multi-domain dataset for the AC task is provided. Empirically, extensive experiments demonstrate the effectiveness of the MGAN.
Traditional supervised learning algorithms depend heavily on extensive handcrafted features to solve aspect-level sentiment classification @cite_22 @cite_26 . These models fail to capture the semantic relatedness between an aspect and its context. To overcome this issue, the attention mechanism, which has been successfully applied in many NLP tasks @cite_1 @cite_25 @cite_32 @cite_27 , can help the model explicitly capture the intrinsic aspect-context association @cite_11 @cite_20 @cite_17 @cite_6 @cite_3 @cite_14 @cite_4 . However, most of these methods rely heavily on data-driven RNNs or tailor-made structures to deal with complicated cases, which requires substantial AT-level data to train effective neural models. Different from them, the proposed model can benefit greatly from useful knowledge learned from a related, resource-rich domain of the AC-level task.
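The aspect-context attention these methods share can be sketched in a few lines, assuming precomputed context hidden states and an aspect vector; all dimensions are illustrative:

```python
# Minimal aspect-context attention: the aspect vector scores each context
# word, and the sentence representation is the attention-weighted sum of
# the context hidden states.
import torch

def aspect_attention(context, aspect):
    """context: (seq_len, dim) hidden states; aspect: (dim,) aspect vector."""
    scores = context @ aspect                   # (seq_len,) relevance per word
    weights = torch.softmax(scores, dim=0)
    return weights @ context                    # (dim,) aspect-specific summary

context = torch.randn(10, 32)                   # e.g., LSTM outputs over a sentence
aspect = torch.randn(32)                        # e.g., averaged aspect-term embedding
print(aspect_attention(context, aspect).shape)  # torch.Size([32])
```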
{ "cite_N": [ "@cite_26", "@cite_14", "@cite_22", "@cite_4", "@cite_1", "@cite_32", "@cite_17", "@cite_6", "@cite_3", "@cite_27", "@cite_25", "@cite_20", "@cite_11" ], "mid": [ "2252057809", "2788810909", "", "", "2133564696", "", "", "2740899359", "2757541972", "", "2951008357", "2412751481", "2529550020" ], "abstract": [ "Reviews depict sentiments of customers towards various aspects of a product or service. Some of these aspects can be grouped into coarser aspect categories. SemEval-2014 had a shared task (Task 4) on aspect-level sentiment analysis, with over 30 teams participated. In this paper, we describe our submissions, which stood first in detecting aspect categories, first in detecting sentiment towards aspect categories, third in detecting aspect terms, and first and second in detecting sentiment towards aspect terms in the laptop and restaurant domains, respectively.", "", "", "", "Neural machine translation is a recently proposed approach to machine translation. Unlike the traditional statistical machine translation, the neural machine translation aims at building a single neural network that can be jointly tuned to maximize the translation performance. The models proposed recently for neural machine translation often belong to a family of encoder-decoders and consists of an encoder that encodes a source sentence into a fixed-length vector from which a decoder generates a translation. In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition.", "", "", "Aspect-level sentiment classification aims at identifying the sentiment polarity of specific target in its context. Previous approaches have realized the importance of targets in sentiment classification and developed various methods with the goal of precisely modeling their contexts via generating target-specific representations. However, these studies always ignore the separate modeling of targets. In this paper, we argue that both targets and contexts deserve special treatment and need to be learned their own representations via interactive learning. Then, we propose the interactive attention networks (IAN) to interactively learn attentions in the contexts and targets, and generate the representations for targets and contexts separately. With this design, the IAN model can well represent a target and its collocative context, which is helpful to sentiment classification. Experimental results on SemEval 2014 Datasets demonstrate the effectiveness of our model.", "", "", "We introduce a neural network with a recurrent attention model over a possibly large external memory. The architecture is a form of Memory Network (, 2015) but unlike the model in that work, it is trained end-to-end, and hence requires significantly less supervision during training, making it more generally applicable in realistic settings. 
It can also be seen as an extension of RNNsearch to the case where multiple computational steps (hops) are performed per output symbol. The flexibility of the model allows us to apply it to tasks as diverse as (synthetic) question answering and to language modeling. For the former our approach is competitive with Memory Networks, but with less supervision. For the latter, on the Penn TreeBank and Text8 datasets our approach demonstrates comparable performance to RNNs and LSTMs. In both cases we show that the key concept of multiple computational hops yields improved results.", "We introduce a deep memory network for aspect level sentiment classification. Unlike feature-based SVM and sequential neural models such as LSTM, this approach explicitly captures the importance of each context word when inferring the sentiment polarity of an aspect. Such importance degree and text representation are calculated with multiple computational layers, each of which is a neural attention model over an external memory. Experiments on laptop and restaurant datasets demonstrate that our approach performs comparable to state-of-art feature based SVM system, and substantially better than LSTM and attention-based LSTM architectures. On both datasets we show that multiple computational layers could improve the performance. Moreover, our approach is also fast. The deep memory network with 9 layers is 15 times faster than LSTM with a CPU implementation.", "Target-dependent sentiment classification remains a challenge: modeling the semantic relatedness of a target with its context words in a sentence. Different context words have different influences on determining the sentiment polarity of a sentence towards the target. Therefore, it is desirable to integrate the connections between target word and context words when building a learning system. In this paper, we develop two target dependent long short-term memory (LSTM) models, where target information is automatically taken into account. We evaluate our methods on a benchmark dataset from Twitter. Empirical results show that modeling sentence representation with standard LSTM does not perform well. Incorporating target information into LSTM can significantly boost the classification accuracy. The target-dependent LSTM models achieve state-of-the-art performances without using syntactic parser or external sentiment lexicons." ] }
1811.10999
2950353181
Aspect-level sentiment classification (ASC) aims at identifying sentiment polarities towards aspects in a sentence, where the aspect can behave as a general Aspect Category (AC) or a specific Aspect Term (AT). However, due to the especially expensive and labor-intensive labeling, existing public corpora in AT-level are all relatively small. Meanwhile, most of the previous methods rely on complicated structures with given scarce data, which largely limits the efficacy of the neural models. In this paper, we exploit a new direction named coarse-to-fine task transfer, which aims to leverage knowledge learned from a rich-resource source domain of the coarse-grained AC task, which is more easily accessible, to improve the learning in a low-resource target domain of the fine-grained AT task. To resolve both the aspect granularity inconsistency and feature mismatch between domains, we propose a Multi-Granularity Alignment Network (MGAN). In MGAN, a novel Coarse2Fine attention guided by an auxiliary task can help the AC task modeling at the same fine-grained level with the AT task. To alleviate the feature false alignment, a contrastive feature alignment method is adopted to align aspect-specific feature representations semantically. In addition, a large-scale multi-domain dataset for the AC task is provided. Empirically, extensive experiments demonstrate the effectiveness of the MGAN.
Existing domain adaptation tasks for sentiment analysis focus on traditional sentiment classification without considering the aspect @cite_12 @cite_0 @cite_34 @cite_29 @cite_2 @cite_13 @cite_8 @cite_10 . In terms of data scarcity and the value of the task, transfer learning is more urgent for aspect-level sentiment analysis, which characterizes users' different preferences. To the best of our knowledge, only a few studies have explored transferring from one aspect category to another within the same domain based on adversarial training @cite_7 . Different from that, we explore a motivated and challenging setting which aims to transfer across both aspect granularity and domain.
{ "cite_N": [ "@cite_7", "@cite_8", "@cite_10", "@cite_29", "@cite_0", "@cite_2", "@cite_34", "@cite_13", "@cite_12" ], "mid": [ "2950455007", "", "", "2949821452", "2153353890", "", "22861983", "2567698949", "2163302275" ], "abstract": [ "We introduce a neural method for transfer learning between two (source and target) classification tasks or aspects over the same domain. Rather than training on target labels, we use a few keywords pertaining to source and target aspects indicating sentence relevance instead of document class labels. Documents are encoded by learning to embed and softly select relevant sentences in an aspect-dependent manner. A shared classifier is trained on the source encoded documents and labels, and applied to target encoded documents. We ensure transfer through aspect-adversarial training so that encoded documents are, as sets, aspect-invariant. Experimental results demonstrate that our approach outperforms different baselines and model variants on two datasets, yielding an improvement of 27 on a pathology dataset and 5 on a review dataset.", "", "", "Stacked denoising autoencoders (SDAs) have been successfully used to learn new representations for domain adaptation. Recently, they have attained record accuracy on standard benchmark tasks of sentiment analysis across different text domains. SDAs learn robust data representations by reconstruction, recovering original features from data that are artificially corrupted with noise. In this paper, we propose marginalized SDA (mSDA) that addresses two crucial limitations of SDAs: high computational cost and lack of scalability to high-dimensional features. In contrast to SDAs, our approach of mSDA marginalizes noise and thus does not require stochastic gradient descent or other optimization algorithms to learn parameters ? in fact, they are computed in closed-form. Consequently, mSDA, which can be implemented in only 20 lines of MATLAB^ TM , significantly speeds up SDAs by two orders of magnitude. Furthermore, the representations learnt by mSDA are as effective as the traditional SDAs, attaining almost identical accuracies in benchmark tasks.", "Sentiment classification aims to automatically predict sentiment polarity (e.g., positive or negative) of users publishing sentiment data (e.g., reviews, blogs). Although traditional classification algorithms can be used to train sentiment classifiers from manually labeled text data, the labeling work can be time-consuming and expensive. Meanwhile, users often use some different words when they express sentiment in different domains. If we directly apply a classifier trained in one domain to other domains, the performance will be very low due to the differences between these domains. In this work, we develop a general solution to sentiment classification when we do not have any labels in a target domain but have some labeled data in a different domain, regarded as source domain. In this cross-domain sentiment classification setting, to bridge the gap between the domains, we propose a spectral feature alignment (SFA) algorithm to align domain-specific words from different domains into unified clusters, with the help of domain-independent words as a bridge. In this way, the clusters can be used to reduce the gap between domain-specific words of the two domains, which can be used to train sentiment classifiers in the target domain accurately. 
Compared to previous approaches, SFA can discover a robust representation for cross-domain data by fully exploiting the relationship between the domain-specific and domain-independent words via simultaneously co-clustering them in a common latent space. We perform extensive experiments on two real world datasets, and demonstrate that SFA significantly outperforms previous approaches to cross-domain sentiment classification.", "", "The exponential increase in the availability of online reviews and recommendations makes sentiment classification an interesting topic in academic and industrial research. Reviews can span so many different domains that it is difficult to gather annotated training data for all of them. Hence, this paper studies the problem of domain adaptation for sentiment classifiers, whereby a system is trained on labeled reviews from one source domain but is meant to be deployed on another. We propose a deep learning approach which learns to extract a meaningful representation for each review in an unsupervised fashion. Sentiment classifiers trained with this high-level feature representation clearly outperform state-of-the-art methods on a benchmark composed of reviews of 4 types of Amazon products. Furthermore, this method scales well and allowed us to successfully perform domain adaptation on a larger industrial-strength dataset of 22 domains.", "", "Automatic sentiment classification has been extensively studied and applied in recent years. However, sentiment is expressed differently in different domains, and annotating corpora for every possible domain of interest is impractical. We investigate domain adaptation for sentiment classifiers, focusing on online reviews for different types of products. First, we extend to sentiment classification the recently-proposed structural correspondence learning (SCL) algorithm, reducing the relative error due to adaptation between domains by an average of 30% over the original SCL algorithm and 46% over a supervised baseline. Second, we identify a measure of domain similarity that correlates well with the potential for adaptation of a classifier from one domain to another. This measure could for instance be used to select a small set of domains to annotate whose trained classifiers would transfer well to many other domains." ] }
1811.11064
2902671857
Many modern machine learning approaches require vast amounts of training data to learn new concepts; conversely, human learning often requires few examples--sometimes only one--from which the learner can abstract structural concepts. We present a novel approach to introducing new spatial structures to an AI agent, combining deep learning over qualitative spatial relations with various heuristic search algorithms. The agent extracts spatial relations from a sparse set of noisy examples of block-based structures, and trains convolutional and sequential models of those relation sets. To create novel examples of similar structures, the agent begins placing blocks on a virtual table, uses a CNN to predict the most similar complete example structure after each placement, an LSTM to predict the most likely set of remaining moves needed to complete it, and recommends one using heuristic search. We verify that the agent learned the concept by observing its virtual block-building activities, wherein it ranks each potential subsequent action toward building its learned concept. We empirically assess this approach with human participants' ratings of the block structures. Initial results and qualitative evaluations of structures generated by the trained agent show where it has generalized concepts from the training data, which heuristics perform best within the search space, and how we might improve learning and execution.
Learning definitions of primitives has long been an area of study in machine learning @cite_11 . Confronting a new class of problems by drawing on similar examples renders the task tractable, as does the ability to break down a complex task into simpler ones @cite_24 @cite_15 @cite_5 @cite_51 @cite_49 . Recommendation systems at large propose future choices based on previous ones, which can be considered special cases of "moves" from one situation to another @cite_48 . Often an example must be adapted to a new situation of identifiable but low similarity, and this knowledge adaptation has also made use of recent advances in machine learning @cite_2 .
{ "cite_N": [ "@cite_48", "@cite_24", "@cite_49", "@cite_2", "@cite_5", "@cite_15", "@cite_51", "@cite_11" ], "mid": [ "", "2107657559", "2949310145", "2146059992", "1898340191", "1733761130", "2185243164", "" ], "abstract": [ "", "Automatic detection of dynamic events in video sequences has a variety of applications including visual surveillance and monitoring, video highlight extraction, intelligent transportation systems, video summarization, and many more. Learning an accurate description of the various events in real-world scenes is challenging owing to the limited user-labeled data as well as the large variations in the pattern of the events. Pattern differences arise either due to the nature of the events themselves such as the spatio-temporal events or due to missing or ambiguous data interpretation using computer vision methods. In this work, we introduce a novel method for representing and classifying events in video sequences using reversible context-free grammars. The grammars are learned using a semi-supervised learning method. More concretely, by using the classification entropy as a heuristic cost function, the grammars are iteratively learned using a search method. Experimental results demonstrating the efficacy of the learning algorithm and the event detection method applied to traffic video sequences are presented.", "We propose a new task of unsupervised action detection by action matching. Given two long videos, the objective is to temporally detect all pairs of matching video segments. A pair of video segments are matched if they share the same human action. The task is category independent---it does not matter what action is being performed---and no supervision is used to discover such video segments. Unsupervised action detection by action matching allows us to align videos in a meaningful manner. As such, it can be used to discover new action categories or as an action proposal technique within, say, an action detection pipeline. Moreover, it is a useful pre-processing step for generating video highlights, e.g., from sports videos. We present an effective and efficient method for unsupervised action detection. We use an unsupervised temporal encoding method and exploit the temporal consistency in human actions to obtain candidate action segments. We evaluate our method on this challenging task using three activity recognition benchmarks, namely, the MPII Cooking activities dataset, the THUMOS15 action detection benchmark and a new dataset called the IKEA dataset. On the MPII Cooking dataset we detect action segments with a precision of 21.6 and recall of 11.7 over 946 long video pairs and over 5000 ground truth action segments. Similarly, on THUMOS dataset we obtain 18.4 precision and 25.1 recall over 5094 ground truth action segment pairs.", "Case-Based Reasoning systems retrieve and reuse solutions for previously solved problems that have been encountered and remembered as cases. In some domains, particularly where the problem solving is a classification task, the retrieved solution can be reused directly. But for design tasks it is common for the retrieved solution to be regarded as an initial solution that should be refined to reflect the differences between the new and retrieved problems. The acquisition of adaptation knowledge to achieve this refinement can be demanding, despite the fact that the knowledge source of stored cases captures a substantial part of the problem-solving expertise. 
This paper describes an introspective learning approach where the case knowledge itself provides a source from which training data for the adaptation task can be assembled. Different learning algorithms are explored and the effect of the learned adaptations is demonstrated for a demanding component-based pharmaceutical design task, tablet formulation. The evaluation highlights the incremental nature of adaptation as a further reasoning step after nearest-neighbour retrieval. A new property-based classification to adapt symbolic values is proposed, and an ensemble of these property-based adaptation classifiers has been particularly successful for the most difficult of the symbolic adaptation tasks in tablet formulation.", "We focus on modeling human activities comprising multiple actions in a completely unsupervised setting. Our model learns the high-level action co-occurrence and temporal relations between the actions in the activity video. We consider the video as a sequence of short-term action clips, called action-words, and an activity is about a set of action-topics indicating which actions are present in the video. Then we propose a new probabilistic model relating the action-words and the action-topics. It allows us to model long-range action relations that commonly exist in the complex activity, which is challenging to capture in the previous works. We apply our model to unsupervised action segmentation and recognition, and also to a novel application that detects forgotten actions, which we call action patching. For evaluation, we also contribute a new challenging RGB-D activity video dataset recorded by the new Kinect v2, which contains several human daily activities as compositions of multiple actions interacted with different objects. The extensive experiments show the effectiveness of our model.", "Event models obtained automatically from video can be used in applications ranging from abnormal event detection to content based video retrieval. When multiple agents are involved in the events, characterizing events naturally suggests encoding interactions as relations. Learning event models from this kind of relational spatio-temporal data using relational learning techniques such as Inductive Logic Programming (ILP) hold promise, but have not been successfully applied to very large datasets which result from video data. In this paper, we present a novel framework remind (Relational Event Model INDuction) for supervised relational learning of event models from large video datasets using ILP. Efficiency is achieved through the learning from interpretations setting and using a typing system that exploits the type hierarchy of objects in a domain. The use of types also helps prevent over generalization. Furthermore, we also present a type-refining operator and prove that it is optimal. The learned models can be used for recognizing events from previously unseen videos. We also present an extension to the framework by integrating an abduction step that improves the learning performance when there is noise in the input data. The experimental results on several hours of video data from two challenging real world domains (an airport domain and a physical action verbs domain) suggest that the techniques are suitable to real world scenarios.", "We address the problem of automatically learning the main steps to complete a certain task, such as changing a car tire, from a set of narrated instruction videos. The contributions of this paper are three-fold. 
First, we develop a new unsupervised learning approach that takes advantage of the complementary nature of the input video and the associated narration. The method solves two clustering problems, one in text and one in video, applied one after each other and linked by joint constraints to obtain a single coherent sequence of steps in both modalities. Second, we collect and annotate a new challenging dataset of real-world instruction videos from the Internet. The dataset contains about 800,000 frames for five different tasks that include complex interactions between people and objects, and are captured in a variety of indoor and outdoor settings. Third, we experimentally demonstrate that the proposed method can automatically discover, in an unsupervised manner, the main steps to achieve the task and locate the steps in the input videos.", "" ] }
1811.11251
2902239567
This paper presents a semi-supervised learning framework to train a keypoint detector using multiview image streams given the limited labeled data (typically @math 4 ). We leverage the complementary relationship between multiview geometry and visual tracking to provide three types of supervisory signals for the unlabeled data: (1) keypoint detection in one view can be supervised by other views via the epipolar geometry; (2) a keypoint moves smoothly over time, so its optical flow can be used to temporally supervise consecutive image frames with respect to each other; (3) a keypoint visible in one view is likely to be visible in adjacent views. We integrate these three signals in a differentiable fashion to design a new end-to-end neural network composed of three pathways. This design allows us to extensively use the unlabeled data to train the keypoint detector. We show that our approach outperforms existing detectors, including DeepLabCut, tailored to the keypoint detection of non-human species such as monkeys, dogs, and mice.
Multiview images possess highly redundant yet distinctive visual information that can be used to self-supervise the unlabeled data. Bootstrapping is a common practice: multiview images are used to robustly reconstruct the geometry from correspondences, which is then projected onto the unlabeled images to provide pseudo-labels; this has been shown to be highly effective @cite_2 @cite_31 @cite_18 . A pitfall of this approach is that it involves an iterative process alternating between learning and reconstruction. Another approach is to learn depth from a single-view image in isolation, which can then be used for self-supervision @cite_5 @cite_21 @cite_26 . This relies on depth prediction, so the accuracy of the trained model is bounded by the accuracy of the reconstruction. @cite_3 introduces a new framework that bypasses 3D reconstruction during training through the epipolar constraint, i.e., the epipolar constraint is transformed into a distribution-matching objective. The problem with this approach is that its performance depends heavily on the pre-trained model. It has no reasoning about outliers, i.e., the recognition network converges to a trivial solution if outliers dominate the distribution of the multiview pose detections.
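To make the epipolar supervision concrete, the sketch below shows how a symmetric epipolar residual could be computed to decide whether detections in two views may supervise each other. The function name, shapes, and thresholding use are illustrative assumptions, not the cited papers' implementations; `F` is assumed to be a known fundamental matrix between two calibrated views.

```python
import numpy as np

def epipolar_residual(x1, x2, F):
    """Symmetric epipolar distance between keypoints detected in two views.

    x1, x2: (N, 2) keypoint locations in views 1 and 2.
    F: (3, 3) fundamental matrix mapping view-1 points to epipolar lines
       in view 2 (l2 = F @ x1_h). Small residuals indicate the two
       detections are geometrically consistent and can cross-supervise.
    """
    n = x1.shape[0]
    x1_h = np.hstack([x1, np.ones((n, 1))])   # homogeneous coordinates
    x2_h = np.hstack([x2, np.ones((n, 1))])
    l2 = x1_h @ F.T                           # epipolar lines in view 2
    l1 = x2_h @ F                             # epipolar lines in view 1
    num = np.abs(np.sum(x2_h * l2, axis=1))   # |x2^T F x1| per keypoint
    d2 = num / np.linalg.norm(l2[:, :2], axis=1)  # point-to-line distance
    d1 = num / np.linalg.norm(l1[:, :2], axis=1)
    return 0.5 * (d1 + d2)
```

A bootstrapping pipeline would keep only pairs below some residual threshold when forming pseudo-labels, which is exactly where the outlier-domination failure mode discussed above can arise.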
{ "cite_N": [ "@cite_18", "@cite_26", "@cite_21", "@cite_3", "@cite_2", "@cite_5", "@cite_31" ], "mid": [ "2963149945", "2609883120", "2609026071", "", "2963488642", "2778680124", "2608018946" ], "abstract": [ "We introduce SE3-Nets which are deep neural networks designed to model and learn rigid body motion from raw point cloud data. Based only on sequences of depth images along with action vectors and point wise data associations, SE3-Nets learn to segment effected object parts and predict their motion resulting from the applied force. Rather than learning point wise flow vectors, SE3-Nets predict SE(3) transformations for different parts of the scene. Using simulated depth data of a table top scene and a robot manipulator, we show that the structure underlying SE3-Nets enables them to generate a far more consistent prediction of object motion than traditional flow based networks. Additional experiments with a depth camera observing a Baxter robot pushing objects on a table show that SE3-Nets also work well on real data.", "We present an unsupervised learning framework for the task of monocular depth and camera motion estimation from unstructured video sequences. In common with recent work [10, 14, 16], we use an end-to-end learning approach with view synthesis as the supervisory signal. In contrast to the previous work, our method is completely unsupervised, requiring only monocular video sequences for training. Our method uses single-view depth and multiview pose networks, with a loss based on warping nearby views to the target using the computed depth and pose. The networks are thus coupled by the loss during training, but can be applied independently at test time. Empirical evaluation on the KITTI dataset demonstrates the effectiveness of our approach: 1) monocular depth performs comparably with supervised methods that use either ground-truth pose or depth for training, and 2) pose estimation performs favorably compared to established SLAM systems under comparable input settings.", "We study the notion of consistency between a 3D shape and a 2D observation and propose a differentiable formulation which allows computing gradients of the 3D shape given an observation from an arbitrary view. We do so by reformulating view consistency using a differentiable ray consistency (DRC) term. We show that this formulation can be incorporated in a learning framework to leverage different types of multi-view observations e.g. foreground masks, depth, color images, semantics etc. as supervision for learning single-view 3D prediction. We present empirical analysis of our technique in a controlled setting. We also show that this approach allows us to improve over existing techniques for single-view reconstruction of objects from the PASCAL VOC dataset.", "", "We present an approach that uses a multi-camera system to train fine-grained detectors for keypoints that are prone to occlusion, such as the joints of a hand. We call this procedure multiview bootstrapping: first, an initial keypoint detector is used to produce noisy labels in multiple views of the hand. The noisy detections are then triangulated in 3D using multiview geometry or marked as outliers. Finally, the reprojected triangulations are used as new labeled training data to improve the detector. We repeat this process, generating more labeled data in each iteration. We derive a result analytically relating the minimum number of views to achieve target true and false positive rates for a given detector. 
The method is used to train a hand keypoint detector for single images. The resulting keypoint detector runs in realtime on RGB images and has accuracy comparable to methods that use depth sensors. The single view detector, triangulated over multiple views, enables 3D markerless hand motion capture with complex object interactions.", "We describe Human Mesh Recovery (HMR), an end-to-end framework for reconstructing a full 3D mesh of a human body from a single RGB image. In contrast to most current methods that compute 2D or 3D joint locations, we produce a richer and more useful mesh representation that is parameterized by shape and 3D joint angles. The main objective is to minimize the reprojection loss of keypoints, which allow our model to be trained using in-the-wild images that only have ground truth 2D annotations. However, reprojection loss alone is highly under constrained. In this work we address this problem by introducing an adversary trained to tell whether a human body parameter is real or not using a large database of 3D human meshes. We show that HMR can be trained with and without using any coupled 2D-to-3D supervision. We do not rely on intermediate 2D keypoint detection and infer 3D pose and shape parameters directly from image pixels. Our model runs in real-time given a bounding box containing the person. We demonstrate our approach on various images in-the-wild and out-perform previous optimizationbased methods that output 3D meshes and show competitive results on tasks such as 3D joint location estimation and part segmentation.", "We propose SfM-Net, a geometry-aware neural network for motion estimation in videos that decomposes frame-to-frame pixel motion in terms of scene and object depth, camera motion and 3D object rotations and translations. Given a sequence of frames, SfM-Net predicts depth, segmentation, camera and rigid object motions, converts those into a dense frame-to-frame motion field (optical flow), differentiably warps frames in time to match pixels and back-propagates. The model can be trained with various degrees of supervision: 1) self-supervised by the re-projection photometric error (completely unsupervised), 2) supervised by ego-motion (camera motion), or 3) supervised by depth (e.g., as provided by RGBD sensors). SfM-Net extracts meaningful depth estimates and successfully estimates frame-to-frame camera rotations and translations. It often successfully segments the moving objects in the scene, even though such supervision is never provided." ] }
1811.11251
2902239567
This paper presents a semi-supervised learning framework to train a keypoint detector using multiview image streams given the limited labeled data (typically @math 4 ). We leverage the complementary relationship between multiview geometry and visual tracking to provide three types of supervisory signals for the unlabeled data: (1) keypoint detection in one view can be supervised by other views via the epipolar geometry; (2) a keypoint moves smoothly over time, so its optical flow can be used to temporally supervise consecutive image frames with respect to each other; (3) a keypoint visible in one view is likely to be visible in adjacent views. We integrate these three signals in a differentiable fashion to design a new end-to-end neural network composed of three pathways. This design allows us to extensively use the unlabeled data to train the keypoint detector. We show that our approach outperforms existing detectors, including DeepLabCut, tailored to the keypoint detection of non-human species such as monkeys, dogs, and mice.
Our main hypothesis is that these two supervisions are complementary. We formulate a spatiotemporal supervision that benefits from both while addressing each of their limitations. (1) We use dense optical flow tracking to address noisy supervision, i.e., it is unlikely that noisy predictions are temporally correlated. (2) We leverage end-to-end epipolar distribution matching to avoid the multimodality issue that arises when using the soft-argmax operation. This formulation is differentiable and therefore trainable. (3) The multiview image streams can alleviate tracking drift @cite_30 @cite_22 , i.e., it is unlikely that tracking drift occurs in a geometrically consistent fashion. (4) A visibility map helps determine the validity of the tracking without explicit outlier rejection.
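As a concrete illustration of the optical-flow supervision in (1) combined with the visibility weighting in (4), the following is a minimal sketch of a temporal consistency loss. The tensor shapes, the name `temporal_consistency_loss`, and the soft visibility weighting are assumptions for illustration, not the paper's exact formulation.

```python
import torch

def temporal_consistency_loss(kp_t, kp_t1, flow_t, visibility):
    """Supervise frame t+1 with flow-tracked keypoints from frame t.

    kp_t, kp_t1: (B, K, 2) keypoints predicted at frames t and t+1.
    flow_t:      (B, K, 2) optical-flow displacements sampled at kp_t.
    visibility:  (B, K) soft weights in [0, 1]; occluded or drifting
                 tracks get low weight instead of hard outlier rejection.
    """
    tracked = kp_t + flow_t                    # expected location at t+1
    err = torch.norm(kp_t1 - tracked, dim=-1)  # per-keypoint pixel error
    return (visibility * err).sum() / visibility.sum().clamp(min=1.0)
```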
{ "cite_N": [ "@cite_30", "@cite_22" ], "mid": [ "1980495828", "2964351057" ], "abstract": [ "Many traditional challenges in reconstructing 3D motion, such as matching across wide baselines and handling occlusion, reduce in significance as the number of unique viewpoints increases. However, to obtain this benefit, a new challenge arises: estimating precisely which cameras observe which points at each instant in time. We present a maximum a posteriori (MAP) estimate of the time-varying visibility of the target points to reconstruct the 3D motion of an event from a large number of cameras. Our algorithm takes, as input, camera poses and image sequences, and outputs the time-varying set of the cameras in which a target patch is visibile and its reconstructed trajectory. We model visibility estimation as a MAP estimate by incorporating various cues including photometric consistency, motion consistency, and geometric consistency, in conjunction with a prior that rewards consistent visibilities in proximal cameras. An optimal estimate of visibility is obtained by finding the minimum cut of a capacitated graph over cameras. We demonstrate that our method estimates visibility with greater accuracy, and increases tracking performance producing longer trajectories, at more locations, and at higher accuracies than methods that ignore visibility or use photometric consistency alone.", "This paper presents a method to assign a semantic label to a 3D reconstructed trajectory from multiview image streams. The key challenge of the semantic labeling lies in the self-occlusion and photometric inconsistency caused by object and social interactions, resulting in highly fragmented trajectory reconstruction with noisy semantic labels. We address this challenge by introducing a new representation called 3D semantic map-a probability distribution over labels per 3D trajectory constructed by a set of semantic recognition across multiple views. Our conjecture is that among many views, there exist a set of views that are more informative than the others. We build the 3D semantic map based on a likelihood of visibility and 2D recognition confidence and identify the view that best represents the semantics of the trajectory. We use this 3D semantic map and trajectory affinity computed by local rigid transformation to precisely infer labels as a whole. This global inference quantitatively outperforms the baseline approaches in terms of predictive validity, representation robustness, and affinity effectiveness. We demonstrate that our algorithm can robustly compute the semantic labels of a large scale trajectory set (e.g., millions of trajectories) involving real-world human interactions with object, scenes, and people." ] }
1811.10990
2953328281
Neural network-based open-ended conversational agents automatically generate responses based on predictive models learned from a large number of pairs of utterances. The generated responses are typically acceptable as sentences but are often dull, generic, and certainly devoid of any emotion. In this paper, we present neural models that learn to express a given emotion in the generated response. We propose four models and evaluate them against three baselines. A model based on the encoder-decoder framework with multiple attention layers provides the best overall performance in terms of expressing the required emotion. While it does not outperform the other models on all emotions, it presents promising results in most cases.
Given tweets labeled with emotions, training a classifier is a supervised text classification task, which is already a well-studied area @cite_29 @cite_14 . Recent state-of-the-art models are usually neural networks @cite_7 with pre-trained word embeddings.
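As a minimal sketch of such a classifier, the snippet below averages pre-trained word vectors per tweet and fits a linear classifier. The toy vocabulary and the random embedding matrix are placeholders (standing in for, e.g., GloVe vectors), and real systems would use a neural encoder as in the cited work.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy vocabulary; the random matrix stands in for real pre-trained vectors.
vocab = {"happy": 0, "sad": 1, "angry": 2, "great": 3, "terrible": 4}
emb = np.random.RandomState(0).randn(len(vocab), 50)

def featurize(tweet):
    """Average the embeddings of the in-vocabulary tokens of a tweet."""
    idx = [vocab[w] for w in tweet.lower().split() if w in vocab]
    return emb[idx].mean(axis=0) if idx else np.zeros(emb.shape[1])

tweets = ["happy great", "sad terrible", "angry terrible", "great happy"]
labels = ["joy", "sadness", "anger", "joy"]
clf = LogisticRegression(max_iter=1000).fit(
    np.stack([featurize(t) for t in tweets]), labels)
print(clf.predict([featurize("so happy and great")]))  # -> ['joy']
```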
{ "cite_N": [ "@cite_29", "@cite_14", "@cite_7" ], "mid": [ "2061495585", "2597655663", "2950141408" ], "abstract": [ "Question Answering (QA) is undoubtedly a growing field of current research in Artificial Intelligence. Question classification, a QA subtask, aims to associate a category to each question, typically representing the semantic class of its answer. This step is of major importance in the QA process, since it is the basis of several key decisions. For instance, classification helps reducing the number of possible answer candidates, as only answers matching the question category should be taken into account. This paper presents and evaluates a rule-based question classifier that partially founds its performance in the detection of the question headword and in its mapping into the target category through the use of WordNet. Moreover, we use the rule-based classifier as a features' provider of a machine learning-based question classifier. A detailed analysis of the rule-base contribution is presented. Despite using a very compact feature space, state of the art results are obtained.", "This paper proposes a new model for extracting an interpretable sentence embedding by introducing self-attention. Instead of using a vector, we use a 2-D matrix to represent the embedding, with each row of the matrix attending on a different part of the sentence. We also propose a self-attention mechanism and a special regularization term for the model. As a side effect, the embedding comes with an easy way of visualizing what specific parts of the sentence are encoded into the embedding. We evaluate our model on 3 different tasks: author profiling, sentiment classification, and textual entailment. Results show that our model yields a significant performance gain compared to other sentence embedding methods in all of the 3 tasks.", "Recurrent Neural Network (RNN) is one of the most popular architectures used in Natural Language Processsing (NLP) tasks because its recurrent structure is very suitable to process variable-length text. RNN can utilize distributed representations of words by first converting the tokens comprising each text into vectors, which form a matrix. And this matrix includes two dimensions: the time-step dimension and the feature vector dimension. Then most existing models usually utilize one-dimensional (1D) max pooling operation or attention-based operation only on the time-step dimension to obtain a fixed-length vector. However, the features on the feature vector dimension are not mutually independent, and simply applying 1D pooling operation over the time-step dimension independently may destroy the structure of the feature representation. On the other hand, applying two-dimensional (2D) pooling operation over the two dimensions may sample more meaningful features for sequence modeling tasks. To integrate the features on both dimensions of the matrix, this paper explores applying 2D max pooling operation to obtain a fixed-length representation of the text. This paper also utilizes 2D convolution to sample more meaningful information of the matrix. Experiments are conducted on six text classification tasks, including sentiment analysis, question classification, subjectivity classification and newsgroup classification. Compared with the state-of-the-art models, the proposed models achieve excellent performance on 4 out of 6 tasks. 
Specifically, one of the proposed models achieves highest accuracy on Stanford Sentiment Treebank binary classification and fine-grained classification tasks." ] }
1811.10990
2953328281
Neural network-based open-ended conversational agents automatically generate responses based on predictive models learned from a large number of pairs of utterances. The generated responses are typically acceptable as sentences but are often dull, generic, and certainly devoid of any emotion. In this paper, we present neural models that learn to express a given emotion in the generated response. We propose four models and evaluate them against three baselines. A model based on the encoder-decoder framework with multiple attention layers provides the best overall performance in terms of expressing the required emotion. While it does not outperform the other models on all emotions, it presents promising results in most cases.
With the rise of deep learning, the success of the technology was also demonstrated in automatic response generation. The sequence-to-sequence (Seq2seq) model, which was shown to be effective in machine translation @cite_11 , was adopted for response generation in open-domain dialogue systems @cite_16 . Instead of predicting a sequence of words in the target language from a sequence of words in the source language, the idea is to predict a sequence of words as a response to another sequence of words. In a nutshell, Seq2seq models are a class of models that learn to generate a sequence of words given another sequence of words as input. Many works based on this framework have improved the response quality from different points of view. Reinforcement learning has been adopted to encourage the model to sustain longer conversations @cite_1 . @cite_32 proposed a hierarchical framework to process context more naturally. Moreover, there are also attempts to avoid generating dull, short responses @cite_25 @cite_17 .
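The core of such models can be stated compactly; below is a minimal GRU-based encoder-decoder sketch in PyTorch. The vocabulary size, hidden sizes, and teacher-forcing setup are illustrative assumptions rather than any cited system's configuration.

```python
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    """Minimal encoder-decoder response generator (illustrative sizes)."""

    def __init__(self, vocab_size=1000, emb_dim=64, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, hidden, batch_first=True)
        self.decoder = nn.GRU(emb_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, src, tgt):
        # Encode the input utterance into a single context state.
        _, h = self.encoder(self.embed(src))
        # Decode the response conditioned on that state (teacher forcing).
        dec_out, _ = self.decoder(self.embed(tgt), h)
        return self.out(dec_out)  # (B, T, vocab) next-token logits

model = Seq2Seq()
src = torch.randint(0, 1000, (2, 10))  # a batch of source utterances
tgt = torch.randint(0, 1000, (2, 8))   # shifted response tokens
logits = model(src, tgt)
```

Training minimizes cross-entropy between `logits` and the next response tokens; at inference the decoder is unrolled one token at a time.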
{ "cite_N": [ "@cite_1", "@cite_32", "@cite_17", "@cite_16", "@cite_25", "@cite_11" ], "mid": [ "2410983263", "2963790827", "2581637843", "1591706642", "2590513900", "2130942839" ], "abstract": [ "Recent neural models of dialogue generation offer great promise for generating responses for conversational agents, but tend to be shortsighted, predicting utterances one at a time while ignoring their influence on future outcomes. Modeling the future direction of a dialogue is crucial to generating coherent, interesting dialogues, a need which led traditional NLP models of dialogue to draw on reinforcement learning. In this paper, we show how to integrate these goals, applying deep reinforcement learning to model future reward in chatbot dialogue. The model simulates dialogues between two virtual agents, using policy gradient methods to reward sequences that display three useful conversational properties: informativity (non-repetitive turns), coherence, and ease of answering (related to forward-looking function). We evaluate our model on diversity, length as well as with human judges, showing that the proposed algorithm generates more interactive responses and manages to foster a more sustained conversation in dialogue simulation. This work marks a first step towards learning a neural conversational model based on the long-term success of dialogues.", "", "", "Conversational modeling is an important task in natural language understanding and machine intelligence. Although previous approaches exist, they are often restricted to specific domains (e.g., booking an airline ticket) and require hand-crafted rules. In this paper, we present a simple approach for this task which uses the recently proposed sequence to sequence framework. Our model converses by predicting the next sentence given the previous sentence or sentences in a conversation. The strength of our model is that it can be trained end-to-end and thus requires much fewer hand-crafted rules. We find that this straightforward model can generate simple conversations given a large conversational training dataset. Our preliminary results suggest that, despite optimizing the wrong objective function, the model is able to converse well. It is able extract knowledge from both a domain specific dataset, and from a large, noisy, and general domain dataset of movie subtitles. On a domain-specific IT helpdesk dataset, the model can find a solution to a technical problem via conversations. On a noisy open-domain movie transcript dataset, the model can perform simple forms of common sense reasoning. As expected, we also find that the lack of consistency is a common failure mode of our model.", "People speak at different levels of specificity in different situations. Depending on their knowledge, interlocutors, mood, etc. A conversational agent should have this ability and know when to be specific and when to be general. We propose an approach that gives a neural network--based conversational agent this ability. Our approach involves alternating between and model training : removing training examples that are closest to the responses most commonly produced by the model trained from the last round and then retrain the model on the remaining dataset. Dialogue generation models trained with different degrees of data distillation manifest different levels of specificity. We then train a reinforcement learning system for selecting among this pool of generation models, to choose the best level of specificity for a given input. 
Compared to the original generative model trained without distillation, the proposed system is capable of generating more interesting and higher-quality responses, in addition to appropriately adjusting specificity depending on the context. Our research constitutes a specific case of a broader approach involving training multiple subsystems from a single dataset distinguished by differences in a specific property one wishes to model. We show that from such a set of subsystems, one can use reinforcement learning to build a system that tailors its output to different input contexts at test time.", "Deep Neural Networks (DNNs) are powerful models that have achieved excellent performance on difficult learning tasks. Although DNNs work well whenever large labeled training sets are available, they cannot be used to map sequences to sequences. In this paper, we present a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure. Our method uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector. Our main result is that on an English to French translation task from the WMT-14 dataset, the translations produced by the LSTM achieve a BLEU score of 34.8 on the entire test set, where the LSTM's BLEU score was penalized on out-of-vocabulary words. Additionally, the LSTM did not have difficulty on long sentences. For comparison, a phrase-based SMT system achieves a BLEU score of 33.3 on the same dataset. When we used the LSTM to rerank the 1000 hypotheses produced by the aforementioned SMT system, its BLEU score increases to 36.5, which is close to the previous state of the art. The LSTM also learned sensible phrase and sentence representations that are sensitive to word order and are relatively invariant to the active and the passive voice. Finally, we found that reversing the order of the words in all source sentences (but not target sentences) improved the LSTM's performance markedly, because doing so introduced many short term dependencies between the source and the target sentence which made the optimization problem easier." ] }
1811.11209
2902965877
3D point cloud is an efficient and flexible representation of 3D structures. Recently, neural networks operating on point clouds have shown superior performance on tasks such as shape classification and part segmentation. However, performance on these tasks is evaluated using complete, aligned shapes, while real-world 3D data are partial and unaligned. A key challenge in learning from unaligned point cloud data is how to attain invariance or equivariance with respect to geometric transformations. To address this challenge, we propose a novel transformer network that operates on 3D point clouds, named Iterative Transformer Network (IT-Net). Different from existing transformer networks, IT-Net predicts a 3D rigid transformation using an iterative refinement scheme inspired by classical image and point cloud alignment algorithms. We demonstrate that models using IT-Net achieve superior performance over baselines on the classification and segmentation of partial, unaligned 3D shapes. Further, we provide an analysis of the efficacy of the iterative refinement scheme in estimating accurate object poses from partial observations.
Traditional point cloud features @cite_14 @cite_15 @cite_9 rely on statistical properties of points, such as local curvature. They do not encode semantic information, and it is non-trivial to find the combination of features that is optimal for a specific task.
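For concreteness, the sketch below computes one representative hand-crafted statistic, a PCA-based surface-variation ("curvature") estimate per point; the descriptors in the cited works (FPFH, HKS, WKS) are more elaborate, and the neighborhood size `k` here is an arbitrary choice.

```python
import numpy as np

def local_curvature(points, k=16):
    """Surface-variation estimate per point, a typical hand-crafted feature.

    For each point, fit a covariance over its k nearest neighbors; the
    ratio of the smallest eigenvalue to the eigenvalue sum captures how
    much the neighborhood deviates from a plane.
    """
    n = points.shape[0]
    # Brute-force pairwise distances (fine for small clouds).
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    nn_idx = np.argsort(d2, axis=1)[:, :k]
    feats = np.empty(n)
    for i in range(n):
        cov = np.cov(points[nn_idx[i]].T)   # 3x3 local covariance
        w = np.linalg.eigvalsh(cov)         # ascending eigenvalues
        feats[i] = w[0] / max(w.sum(), 1e-12)
    return feats

pts = np.random.rand(256, 3)
print(local_curvature(pts)[:5])
```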
{ "cite_N": [ "@cite_9", "@cite_15", "@cite_14" ], "mid": [ "2100657858", "2160821342", "2091791686" ], "abstract": [ "We propose a novel point signature based on the properties of the heat diffusion process on a shape. Our signature, called the Heat Kernel Signature (or HKS), is obtained by restricting the well-known heat kernel to the temporal domain. Remarkably we show that under certain mild assumptions, HKS captures all of the information contained in the heat kernel, and characterizes the shape up to isometry. This means that the restriction to the temporal domain, on the one hand, makes HKS much more concise and easily commensurable, while on the other hand, it preserves all of the information about the intrinsic geometry of the shape. In addition, HKS inherits many useful properties from the heat kernel, which means, in particular, that it is stable under perturbations of the shape. Our signature also provides a natural and efficiently computable multi-scale way to capture information about neighborhoods of a given point, which can be extremely useful in many applications. To demonstrate the practical relevance of our signature, we present several methods for non-rigid multi-scale matching based on the HKS and use it to detect repeated structure within the same shape and across a collection of shapes.", "In our recent work [1], [2], we proposed Point Feature Histograms (PFH) as robust multi-dimensional features which describe the local geometry around a point p for 3D point cloud datasets. In this paper, we modify their mathematical expressions and perform a rigorous analysis on their robustness and complexity for the problem of 3D registration for overlapping point cloud views. More concretely, we present several optimizations that reduce their computation times drastically by either caching previously computed values or by revising their theoretical formulations. The latter results in a new type of local features, called Fast Point Feature Histograms (FPFH), which retain most of the discriminative power of the PFH. Moreover, we propose an algorithm for the online computation of FPFH features for realtime applications. To validate our results we demonstrate their efficiency for 3D registration and propose a new sample consensus based method for bringing two datasets into the convergence basin of a local non-linear optimizer: SAC-IA (SAmple Consensus Initial Alignment).", "We introduce the Wave Kernel Signature (WKS) for characterizing points on non-rigid three-dimensional shapes. The WKS represents the average probability of measuring a quantum mechanical particle at a specific location. By letting vary the energy of the particle, the WKS encodes and separates information from various different Laplace eigenfrequencies. This clear scale separation makes the WKS well suited for a large variety of applications. Both theoretically and in quantitative experiments we demonstrate that the WKS is substantially more discriminative and therefore allows for better feature matching than the commonly used Heat Kernel Signature (HKS). As an application of the WKS in shape analysis we show results on shape matching." ] }
1811.11209
2902965877
3D point cloud is an efficient and flexible representation of 3D structures. Recently, neural networks operating on point clouds have shown superior performance on tasks such as shape classification and part segmentation. However, performance on these tasks is evaluated using complete, aligned shapes, while real-world 3D data are partial and unaligned. A key challenge in learning from unaligned point cloud data is how to attain invariance or equivariance with respect to geometric transformations. To address this challenge, we propose a novel transformer network that operates on 3D point clouds, named Iterative Transformer Network (IT-Net). Different from existing transformer networks, IT-Net predicts a 3D rigid transformation using an iterative refinement scheme inspired by classical image and point cloud alignment algorithms. We demonstrate that models using IT-Net achieve superior performance over baselines on the classification and segmentation of partial, unaligned 3D shapes. Further, we provide an analysis of the efficacy of the iterative refinement scheme in estimating accurate object poses from partial observations.
PointNet @cite_2 @cite_21 proposes a way to extract semantic, task-specific features from point clouds using a deep neural network. The key idea of PointNet is to use a symmetric function (e.g., max-pooling) to aggregate pointwise features so that the global feature is invariant to permutations of the points. A drawback of PointNet is that it does not account for local interactions among points. Thus, several extensions @cite_20 @cite_13 , which augment the input with information from local neighborhoods of points, have been proposed.
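The symmetric-function idea is compact enough to sketch directly. The toy PointNet-style module below illustrates only the permutation-invariant aggregation, not the full architecture (which also includes input and feature transforms); the layer sizes are arbitrary.

```python
import torch
import torch.nn as nn

class MiniPointNet(nn.Module):
    """Pointwise MLP + max-pool: the permutation-invariant core of PointNet."""

    def __init__(self, out_dim=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, out_dim), nn.ReLU(),
        )

    def forward(self, pts):             # pts: (B, N, 3)
        f = self.mlp(pts)               # per-point features, (B, N, D)
        return f.max(dim=1).values      # symmetric aggregation, (B, D)

net = MiniPointNet()
x = torch.rand(4, 1024, 3)
g = net(x)
# Permuting the points leaves the global feature unchanged.
perm = torch.randperm(1024)
assert torch.allclose(g, net(x[:, perm]), atol=1e-6)
```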
{ "cite_N": [ "@cite_21", "@cite_13", "@cite_20", "@cite_2" ], "mid": [ "2624503621", "2785053089", "", "2560609797" ], "abstract": [ "Few prior works study deep learning on point sets. PointNet by is a pioneer in this direction. However, by design PointNet does not capture local structures induced by the metric space points live in, limiting its ability to recognize fine-grained patterns and generalizability to complex scenes. In this work, we introduce a hierarchical neural network that applies PointNet recursively on a nested partitioning of the input point set. By exploiting metric space distances, our network is able to learn local features with increasing contextual scales. With further observation that point sets are usually sampled with varying densities, which results in greatly decreased performance for networks trained on uniform densities, we propose novel set learning layers to adaptively combine features from multiple scales. Experiments show that our network called PointNet++ is able to learn deep point set features efficiently and robustly. In particular, results significantly better than state-of-the-art have been obtained on challenging benchmarks of 3D point clouds.", "Point clouds provide a flexible and scalable geometric representation suitable for countless applications in computer graphics; they also comprise the raw output of most 3D data acquisition devices. Hence, the design of intelligent computational models that act directly on point clouds is critical, especially when efficiency considerations or noise preclude the possibility of expensive denoising and meshing procedures. While hand-designed features on point clouds have long been proposed in graphics and vision, however, the recent overwhelming success of convolutional neural networks (CNNs) for image analysis suggests the value of adapting insight from CNN to the point cloud world. To this end, we propose a new neural network module dubbed EdgeConv suitable for CNN-based high-level tasks on point clouds including classification and segmentation. EdgeConv is differentiable and can be plugged into existing architectures. Compared to existing modules operating largely in extrinsic space or treating each point independently, EdgeConv has several appealing properties: It incorporates local neighborhood information; it can be stacked or recurrently applied to learn global shape properties; and in multi-layer systems affinity in feature space captures semantic characteristics over potentially long distances in the original embedding. Beyond proposing this module, we provide extensive evaluation and analysis revealing that EdgeConv captures and exploits fine-grained geometric properties of point clouds. The proposed approach achieves state-of-the-art performance on standard benchmarks including ModelNet40 and S3DIS.", "", "Point cloud is an important type of geometric data structure. Due to its irregular format, most researchers transform such data to regular 3D voxel grids or collections of images. This, however, renders data unnecessarily voluminous and causes issues. In this paper, we design a novel type of neural network that directly consumes point clouds, which well respects the permutation invariance of points in the input. Our network, named PointNet, provides a unified architecture for applications ranging from object classification, part segmentation, to scene semantic parsing. Though simple, PointNet is highly efficient and effective. 
Empirically, it shows strong performance on par or even better than state of the art. Theoretically, we provide analysis towards understanding of what the network has learnt and why the network is robust with respect to input perturbation and corruption." ] }
1811.11209
2902965877
3D point cloud is an efficient and flexible representation of 3D structures. Recently, neural networks operating on point clouds have shown superior performance on tasks such as shape classification and part segmentation. However, performance on these tasks is evaluated using complete, aligned shapes, while real-world 3D data are partial and unaligned. A key challenge in learning from unaligned point cloud data is how to attain invariance or equivariance with respect to geometric transformations. To address this challenge, we propose a novel transformer network that operates on 3D point clouds, named Iterative Transformer Network (IT-Net). Different from existing transformer networks, IT-Net predicts a 3D rigid transformation using an iterative refinement scheme inspired by classical image and point cloud alignment algorithms. We demonstrate that models using IT-Net achieve superior performance over baselines on the classification and segmentation of partial, unaligned 3D shapes. Further, we provide an analysis of the efficacy of the iterative refinement scheme in estimating accurate object poses from partial observations.
Most datasets @cite_11 @cite_7 used to evaluate feature learning on point clouds consist of complete point clouds. A few works @cite_2 @cite_1 have investigated feature learning from partial point clouds. However, they all assume that the point clouds are aligned in a canonical coordinate system. In this work, we show how to remove this assumption using a transformer network.
{ "cite_N": [ "@cite_1", "@cite_2", "@cite_7", "@cite_11" ], "mid": [ "2951138483", "2560609797", "2553307952", "2951755740" ], "abstract": [ "Shape completion, the problem of estimating the complete geometry of objects from partial observations, lies at the core of many vision and robotics applications. In this work, we propose Point Completion Network (PCN), a novel learning-based approach for shape completion. Unlike existing shape completion methods, PCN directly operates on raw point clouds without any structural assumption (e.g. symmetry) or annotation (e.g. semantic class) about the underlying shape. It features a decoder design that enables the generation of fine-grained completions while maintaining a small number of parameters. Our experiments show that PCN produces dense, complete point clouds with realistic structures in the missing regions on inputs with various levels of incompleteness and noise, including cars from LiDAR scans in the KITTI dataset.", "Point cloud is an important type of geometric data structure. Due to its irregular format, most researchers transform such data to regular 3D voxel grids or collections of images. This, however, renders data unnecessarily voluminous and causes issues. In this paper, we design a novel type of neural network that directly consumes point clouds, which well respects the permutation invariance of points in the input. Our network, named PointNet, provides a unified architecture for applications ranging from object classification, part segmentation, to scene semantic parsing. Though simple, PointNet is highly efficient and effective. Empirically, it shows strong performance on par or even better than state of the art. Theoretically, we provide analysis towards understanding of what the network has learnt and why the network is robust with respect to input perturbation and corruption.", "Large repositories of 3D shapes provide valuable input for data-driven analysis and modeling tools. They are especially powerful once annotated with semantic information such as salient regions and functional parts. We propose a novel active learning method capable of enriching massive geometric datasets with accurate semantic region annotations. Given a shape collection and a user-specified region label our goal is to correctly demarcate the corresponding regions with minimal manual work. Our active framework achieves this goal by cycling between manually annotating the regions, automatically propagating these annotations across the rest of the shapes, manually verifying both human and automatic annotations, and learning from the verification results to improve the automatic propagation algorithm. We use a unified utility function that explicitly models the time cost of human input across all steps of our method. This allows us to jointly optimize for the set of models to annotate and for the set of models to verify based on the predicted impact of these actions on the human efficiency. We demonstrate that incorporating verification of all produced labelings within this unified objective improves both accuracy and efficiency of the active learning procedure. We automatically propagate human labels across a dynamic shape network using a conditional random field (CRF) framework, taking advantage of global shape-to-shape similarities, local feature similarities, and point-to-point correspondences. By combining these diverse cues we achieve higher accuracy than existing alternatives. 
We validate our framework on existing benchmarks demonstrating it to be significantly more efficient at using human input compared to previous techniques. We further validate its efficiency and robustness by annotating a massive shape dataset, labeling over 93,000 shape parts, across multiple model classes, and providing a labeled part collection more than one order of magnitude larger than existing ones.", "3D shape is a crucial but heavily underutilized cue in today's computer vision systems, mostly due to the lack of a good generic shape representation. With the recent availability of inexpensive 2.5D depth sensors (e.g. Microsoft Kinect), it is becoming increasingly important to have a powerful 3D shape representation in the loop. Apart from category recognition, recovering full 3D shapes from view-based 2.5D depth maps is also a critical part of visual understanding. To this end, we propose to represent a geometric 3D shape as a probability distribution of binary variables on a 3D voxel grid, using a Convolutional Deep Belief Network. Our model, 3D ShapeNets, learns the distribution of complex 3D shapes across different object categories and arbitrary poses from raw CAD data, and discovers hierarchical compositional part representations automatically. It naturally supports joint object recognition and shape completion from 2.5D depth maps, and it enables active object recognition through view planning. To train our 3D deep learning model, we construct ModelNet -- a large-scale 3D CAD model dataset. Extensive experiments show that our 3D deep representation enables significant performance improvement over the-state-of-the-arts in a variety of tasks." ] }
1811.11209
2902965877
3D point cloud is an efficient and flexible representation of 3D structures. Recently, neural networks operating on point clouds have shown superior performance on tasks such as shape classification and part segmentation. However, performance on these tasks is evaluated using complete, aligned shapes, while real-world 3D data are partial and unaligned. A key challenge in learning from unaligned point cloud data is how to attain invariance or equivariance with respect to geometric transformations. To address this challenge, we propose a novel transformer network that operates on 3D point clouds, named Iterative Transformer Network (IT-Net). Different from existing transformer networks, IT-Net predicts a 3D rigid transformation using an iterative refinement scheme inspired by classical image and point cloud alignment algorithms. We demonstrate that models using IT-Net achieve superior performance over baselines on the classification and segmentation of partial, unaligned 3D shapes. Further, we provide an analysis of the efficacy of the iterative refinement scheme in estimating accurate object poses from partial observations.
Spatial Transformer Network (STN) @cite_22 is a network module that performs explicit geometric transformations on the input data. STN can be thought of as a geometry predictor which models the complicated non-linear relationship between the appearance of the image and geometric transformations. It can be trained jointly with classification networks and has the benefit of introducing invariance to geometric transformations.
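A minimal STN for 2D images, assuming a 28x28 input and a tiny localization network, might look as follows; the identity initialization of the final layer is the standard trick for stable training.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleSTN(nn.Module):
    """Localization net predicting a 2D affine transform (illustrative)."""

    def __init__(self):
        super().__init__()
        self.loc = nn.Sequential(
            nn.Flatten(), nn.Linear(28 * 28, 32), nn.ReLU(), nn.Linear(32, 6)
        )
        # Initialize to the identity transform so training starts stable.
        self.loc[-1].weight.data.zero_()
        self.loc[-1].bias.data.copy_(torch.tensor([1., 0, 0, 0, 1, 0]))

    def forward(self, x):                   # x: (B, 1, 28, 28)
        theta = self.loc(x).view(-1, 2, 3)  # per-image affine parameters
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)

x = torch.rand(8, 1, 28, 28)
warped = SimpleSTN()(x)  # differentiable warp, trainable end to end
```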
{ "cite_N": [ "@cite_22" ], "mid": [ "2951005624" ], "abstract": [ "Convolutional Neural Networks define an exceptionally powerful class of models, but are still limited by the lack of ability to be spatially invariant to the input data in a computationally and parameter efficient manner. In this work we introduce a new learnable module, the Spatial Transformer, which explicitly allows the spatial manipulation of data within the network. This differentiable module can be inserted into existing convolutional architectures, giving neural networks the ability to actively spatially transform feature maps, conditional on the feature map itself, without any extra training supervision or modification to the optimisation process. We show that the use of spatial transformers results in models which learn invariance to translation, scale, rotation and more generic warping, resulting in state-of-the-art performance on several benchmarks, and for a number of classes of transformations." ] }
1811.11209
2902965877
3D point cloud is an efficient and flexible representation of 3D structures. Recently, neural networks operating on point clouds have shown superior performance on tasks such as shape classification and part segmentation. However, performance on these tasks is evaluated using complete, aligned shapes, while real-world 3D data are partial and unaligned. A key challenge in learning from unaligned point cloud data is how to attain invariance or equivariance with respect to geometric transformations. To address this challenge, we propose a novel transformer network that operates on 3D point clouds, named Iterative Transformer Network (IT-Net). Different from existing transformer networks, IT-Net predicts a 3D rigid transformation using an iterative refinement scheme inspired by classical image and point cloud alignment algorithms. We demonstrate that models using IT-Net achieve superior performance over baselines on the classification and segmentation of partial, unaligned 3D shapes. Further, we provide an analysis of the efficacy of the iterative refinement scheme in estimating accurate object poses from partial observations.
Inverse Compositional Spatial Transformer Network (IC-STN) @cite_8 is an extension of STN that makes use of an iterative alignment scheme analogous to the Lucas-Kanade algorithm @cite_12 . It demonstrates that geometric transformations can be predicted from images more accurately in an iterative fashion.
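The compose-and-apply loop at the heart of such iterative schemes can be sketched in a few lines. For simplicity, the example below uses 2D rigid transforms and a crude stand-in pose regressor, whereas IT-Net predicts 3D rigid transformations from a point-feature encoder.

```python
import torch
import torch.nn as nn

def rigid2d(params):
    """params: (B, 3) = (angle, tx, ty) -> (B, 3, 3) homogeneous transforms."""
    a, tx, ty = params[:, 0], params[:, 1], params[:, 2]
    c, s = torch.cos(a), torch.sin(a)
    T = torch.zeros(params.size(0), 3, 3)
    T[:, 0, 0], T[:, 0, 1], T[:, 0, 2] = c, -s, tx
    T[:, 1, 0], T[:, 1, 1], T[:, 1, 2] = s, c, ty
    T[:, 2, 2] = 1.0
    return T

class IterativeAligner(nn.Module):
    def __init__(self, n_iter=3):
        super().__init__()
        self.n_iter = n_iter
        # Stand-in pose regressor; IT-Net would use a point-cloud encoder.
        self.regress = nn.Sequential(
            nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 3))

    def forward(self, pts):                 # pts: (B, N, 2)
        B = pts.size(0)
        T = torch.eye(3).expand(B, 3, 3).contiguous()
        for _ in range(self.n_iter):
            dT = rigid2d(self.regress(pts.mean(dim=1)))  # small correction
            # Apply the update and accumulate the transform, so each
            # iteration sees a progressively better-aligned input.
            pts = torch.einsum('bij,bnj->bni',
                               dT[:, :2, :2], pts) + dT[:, None, :2, 2]
            T = dT @ T
        return pts, T

aligned, pose = IterativeAligner()(torch.rand(4, 128, 2))
```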
{ "cite_N": [ "@cite_12", "@cite_8" ], "mid": [ "2118877769", "2949440248" ], "abstract": [ "Image registration finds a variety of applications in computer vision. Unfortunately, traditional image registration techniques tend to be costly. We present a new image registration technique that makes use of the spatial intensity gradient of the images to find a good match using a type of Newton-Raphson iteration. Our technique is taster because it examines far fewer potential matches between the images than existing techniques Furthermore, this registration technique can be generalized to handle rotation, scaling and shearing. We show how our technique can be adapted tor use in a stereo vision system.", "In this paper, we establish a theoretical connection between the classical Lucas & Kanade (LK) algorithm and the emerging topic of Spatial Transformer Networks (STNs). STNs are of interest to the vision and learning communities due to their natural ability to combine alignment and classification within the same theoretical framework. Inspired by the Inverse Compositional (IC) variant of the LK algorithm, we present Inverse Compositional Spatial Transformer Networks (IC-STNs). We demonstrate that IC-STNs can achieve better performance than conventional STNs with less model capacity; in particular, we show superior performance in pure image alignment tasks as well as joint alignment classification problems on real-world problems." ] }
1811.11242
2902696364
Data scientists spend the majority of their time on preparing data for analysis. One of the first steps in this preparation phase is to load the data from the raw storage format. Comma-separated value (CSV) files are a popular format for tabular data due to their simplicity and ostensible ease of use. However, formatting standards for CSV files are not followed consistently, so each file requires manual inspection and potentially repair before the data can be loaded, an enormous waste of human effort for a task that should be one of the simplest parts of data science. The first and most essential step in retrieving data from CSV files is deciding on the dialect of the file, such as the cell delimiter and quote character. Existing dialect detection approaches are few and non-robust. In this paper, we propose a dialect detection method based on a novel measure of data consistency of parsed data files. Our method achieves 97% overall accuracy on a large corpus of real-world CSV files and improves the accuracy on messy CSV files by almost 22% compared to existing approaches, including those in the Python standard library. Our measure of data consistency is not specific to the data parsing problem, and has potential for more general applicability.
Other work related to CSV parsing is the work on DeExcelerator by @cite_0 . This work focuses mainly on spreadsheets in Excel format, but can also handle CSV files when the correct dialect is provided by the user. The DeExcelerator program then extracts tables from the spreadsheet and, in the process, performs header recognition, data type recognition, value extrapolation, and other operations based on heuristic rules. In follow-up work, koci2016machine present a method for classifying the role of each cell in the spreadsheet (i.e., attribute, data, header, etc.) using surface-level features derived from the formatting of the text and classification methods such as decision trees and support vector machines. While the DeExcelerator package offers no method for detecting the dialect of CSV files, its methods for table and header detection are applicable to the general CSV parsing problem outlined above.
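As an illustration of consistency-based dialect detection in general (a simplified stand-in, not the specific measure proposed in the paper), the sketch below scores each candidate (delimiter, quote character) pair by how uniform the parsed rows are and prefers wider tables:

```python
import csv
import io
from statistics import mean

def detect_dialect(text, delimiters=",;\t|", quotes="\"'"):
    """Pick the (delimiter, quotechar) pair giving the most consistent parse.

    Rewards parses whose rows all have the same number of cells, and
    breaks ties in favor of tables with more columns.
    """
    best, best_score = None, -1.0
    for d in delimiters:
        for q in quotes:
            reader = csv.reader(io.StringIO(text), delimiter=d, quotechar=q)
            rows = [r for r in reader if r]
            if not rows:
                continue
            widths = [len(r) for r in rows]
            uniform = widths.count(widths[0]) / len(widths)
            score = uniform * mean(widths)  # consistent, wide tables win
            if score > best_score:
                best, best_score = (d, q), score
    return best

print(detect_dialect('a;b;"c;d"\n1;2;3\n4;5;6\n'))  # -> (';', '"')
```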
{ "cite_N": [ "@cite_0" ], "mid": [ "1974329121" ], "abstract": [ "Of the structured data published on the web, for instance as datasets on Open Data Platforms such as data.gov, but also in the form of HTML tables on the general web, only a small part is in a relational form. Instead the data is intermingled with formatting, layout and textual metadata, i.e., it is contained in partially structured documents. This makes transformation into a true relational form necessary, which is a precondition for most forms of data analysis and data integration. Studying data.gov as an example source for partially structured documents, we present a classification of typical normalization problems. We then present the DeExcelerator, which is a framework for extracting relations from partially structured documents such as spreadsheets and HTML tables." ] }
1906.11711
2953608346
Many recommendation algorithms suffer from popularity bias: a small number of popular items being recommended too frequently, while other items get insufficient exposure. Research in this area so far has concentrated on a one-shot representation of this bias, and on algorithms to improve the diversity of individual recommendation lists. In this work, we take a time-sensitive view of popularity bias, in which the algorithm assesses its long-tail coverage at regular intervals, and compensates in the present moment for omissions in the past. In particular, we present a temporal version of the well-known xQuAD diversification algorithm adapted for long-tail recommendation. Experimental results on two public datasets show that our method is more effective in terms of the long-tail coverage and accuracy tradeoff compared to some other existing approaches.
Item popularity and its impact on recommendation quality have been explored by several researchers @cite_20 @cite_7 . These authors tried to improve the performance of the recommender system in terms of accuracy and precision, given the long tail in the rating distribution. Our work, instead, focuses on reducing popularity bias and balancing the representation of items across the popularity distribution.
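A common preliminary for such methods is the head/tail split itself. The sketch below uses a cumulative-share convention (items covering the first 80% of ratings form the head), which is one typical Pareto-style choice rather than a fixed standard.

```python
def split_head_tail(item_counts, head_share=0.8):
    """Split items into 'head' and 'tail' by cumulative rating share.

    item_counts: dict item_id -> number of ratings. Items accounting for
    the first `head_share` of all ratings form the short head; the rest
    is the long tail.
    """
    items = sorted(item_counts, key=item_counts.get, reverse=True)
    total = sum(item_counts.values())
    head, cum = [], 0
    for it in items:
        if cum / total >= head_share:
            break
        head.append(it)
        cum += item_counts[it]
    return set(head), set(items) - set(head)

counts = {"i1": 500, "i2": 300, "i3": 50, "i4": 30, "i5": 20}
head, tail = split_head_tail(counts)
print(head, tail)  # i1, i2 dominate the ratings; i3-i5 form the tail
```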
{ "cite_N": [ "@cite_7", "@cite_20" ], "mid": [ "2023954349", "2096233795" ], "abstract": [ "The paper studies the Long Tail problem of recommender systems when many items in the Long Tail have only few ratings, thus making it hard to use them in recommender systems. The approach presented in the paper splits the whole itemset into the head and the tail parts and clusters only the tail items. Then recommendations for the tail items are based on the ratings in these clusters and for the head items on the ratings of individual items. If such partition and clustering are done properly, we show that this reduces the recommendation error rates for the tail items, while maintaining reasonable computational performance.", "Dozens of markets of all types are in the early stages of a revolution as the Internet and related technologies vastly expand the variety of products that can be produced, promoted and purchased. Although this revolution is based on a simple set of economic and technological drivers, the authors argue that its implications are far-reaching for managers, consumers and the economy as a whole. This article looks at what has been dubbed the \"Long Tail\" phenomenon, examining how customers derive value from an important characteristic of Internet markets: the ability of online merchants to help consumers locate, evaluate and purchase a far wider range of products than they can typically buy via the traditional brick-and-mortar channels. The article examines the Long Tail from both the supply side and the demand side and identifies several key drivers. On the supply side, the authors point out how e-tailers' expanded, centralized warehousing allows for more offerings, thus making it possible for them to cater to more varied tastes. On the demand side, tools such as search engines, recommender software and sampling tools are allowing customers to find products outside of their geographic area. The authors also look toward the future to discuss second order amplified effects of Long Tail, including the growth of markets serving smaller niches." ] }
1906.11711
2953608346
Many recommendation algorithms suffer from popularity bias: a small number of popular items being recommended too frequently, while other items get insufficient exposure. Research in this area so far has concentrated on a one-shot representation of this bias, and on algorithms to improve the diversity of individual recommendation lists. In this work, we take a time-sensitive view of popularity bias, in which the algorithm assesses its long-tail coverage at regular intervals, and compensates in the present moment for omissions in the past. In particular, we present a temporal version of the well-known xQuAD diversification algorithm adapted for long-tail recommendation. Experimental results on two public datasets show that our method is more effective in terms of the long-tail coverage and accuracy tradeoff compared to some other existing approaches.
A regularization-based approach to improving long-tail recommendations is found in @cite_2 . One limitation of that work is that it is restricted to factorization models, where the long-tail preference can be encoded in terms of the latent factors. That algorithm also does not account for differential user tolerance towards long-tail items. A re-ranking approach, by contrast, can be applied to any base algorithm, and in our implementation we also take personalization of long-tail promotion into account.
{ "cite_N": [ "@cite_2" ], "mid": [ "2748058847" ], "abstract": [ "Many recommendation algorithms suffer from popularity bias in their output: popular items are recommended frequently and less popular ones rarely, if at all. However, less popular, long-tail items are precisely those that are often desirable recommendations. In this paper, we introduce a flexible regularization-based framework to enhance the long-tail coverage of recommendation lists in a learning-to-rank algorithm. We show that regularization provides a tunable mechanism for controlling the trade-off between accuracy and coverage. Moreover, the experimental results using two data sets show that it is possible to improve coverage of long tail items without substantial loss of ranking performance." ] }
1906.11711
2953608346
Many recommendation algorithms suffer from popularity bias: a small number of popular items being recommended too frequently, while other items get insufficient exposure. Research in this area so far has concentrated on a one-shot representation of this bias, and on algorithms to improve the diversity of individual recommendation lists. In this work, we take a time-sensitive view of popularity bias, in which the algorithm assesses its long-tail coverage at regular intervals, and compensates in the present moment for omissions in the past. In particular, we present a temporal version of the well-known xQuAD diversification algorithm adapted for long-tail recommendation. Experimental results on two public datasets show that our method is more effective in terms of the long-tail coverage and accuracy tradeoff compared to some other existing approaches.
There is substantial research on recommendation diversity, where the goal is to avoid recommending too many similar items @cite_16 @cite_14 @cite_9 , including work on personalized diversity, in which the amount of diversification depends on the user's tolerance @cite_6 @cite_1 . A work similar to ours is @cite_21 , where the authors used a modified version of xQuAD for intent-oriented diversification of search results and recommendations (the greedy xQuAD objective is sketched below). xQuAD has also been applied in a recommendation setting by @cite_13 , where the authors used it to improve recommendation fairness in a microlending scenario.
{ "cite_N": [ "@cite_14", "@cite_9", "@cite_21", "@cite_1", "@cite_6", "@cite_16", "@cite_13" ], "mid": [ "1579592807", "2097951507", "1969077952", "2865407912", "2729349341", "2111094216", "" ], "abstract": [ "This is an electronic version of the paper presented at the International Workshop on Diversity in Document Retrieval, held in Dublin on 2011", "The primary premise upon which top-N recommender systems operate is that similar users are likely to have similar tastes with regard to their product choices. For this reason, recommender algorithms depend deeply on similarity metrics to build the recommendation lists for end-users. However, it has been noted that the products offered on recommendation lists are often too similar to each other and attention has been paid towards the goal of improving diversity to avoid monotonous recommendations. Noting that the retrieval of a set of items matching a user query is a common problem across many applications of information retrieval, we model the competing goals of maximizing the diversity of the retrieved list while maintaining adequate similarity to the user query as a binary optimization problem. We explore a solution strategy to this optimization problem by relaxing it to a trust-region problem.This leads to a parameterized eigenvalue problem whose solution is finally quantized to the required binary solution. We apply this approach to the top-N prediction problem, evaluate the system performance on the Movielens dataset and compare it with a standard item-based top-N algorithm. A new evaluation metric ItemNovelty is proposed in this work. Improvements on both diversity and accuracy are obtained compared to the benchmark algorithm.", "The intent-oriented search diversification methods developed in the field so far tend to build on generative views of the retrieval system to be diversified. Core algorithm components in particular redundancy assessment are expressed in terms of the probability to observe documents, rather than the probability that the documents be relevant. This has been sometimes described as a view considering the selection of a single document in the underlying task model. In this paper we propose an alternative formulation of aspect-based diversification algorithms which explicitly includes a formal relevance model. We develop means for the effective computation of the new formulation, and we test the resulting algorithm empirically. We report experiments on search and recommendation tasks showing competitive or better performance than the original diversification algorithms. The relevance-based formulation has further interesting properties, such as unifying two well-known state of the art algorithms into a single version. The relevance-based approach opens alternative possibilities for further formal connections and developments as natural extensions of the framework. We illustrate this by modeling tolerance to redundancy as an explicit configurable parameter, which can be set to better suit the characteristics of the IR task, or the evaluation metrics, as we illustrate empirically.", "Diversity has been identified as one of the key dimensions of recommendation utility that should be considered besides the overall accuracy of the system. A common diversification approach is to rerank results produced by a baseline recommendation engine according to a diversification criterion. The intent-aware framework is one of the frameworks that has been proposed for recommendations diversification. 
It assumes existence of a set of aspects associated with items, which also represent user intentions, and the framework promotes diversity across the aspects to address user expectations more accurately. In this paper we consider item-based collaborative filtering and suggest that the traditional view of item similarity is lacking a user perspective. We argue that user preferences towards different aspects should be reflected in recommendations produced by the system. We incorporate the intent-aware framework into the item-based recommendation algorithm by injecting personalised intent-aware covariance into the item similarity measure, and explore the impact of such change on the performance of the algorithm. Our experiments show that the proposed method improves both accuracy and diversity of recommendations, offering better accuracy diversity tradeoff than existing solutions.", "Much of the focus of recommender systems research has been on the accurate prediction of users' ratings for unseen items. Recent work has suggested that objectives such as diversity and novelty in recommendations are also important factors in the effectiveness of a recommender system. However, methods that attempt to increase diversity of recommendation lists for all users without considering each user's preference or tolerance for diversity may lead to monotony for some users and to poor recommendations for others. Our goal in this research is to evaluate the hypothesis that users' propensity towards diversity varies greatly and that the diversity of recommendation lists should be consistent with the level of user interest in diverse recommendations. We propose a pre-filtering clustering approach to group users with similar levels of tolerance for diversity. Our contributions are twofold. First, we propose a method for personalizing diversity by performing collaborative filtering independently on different segments of users based on the degree of diversity in their profiles. Secondly, we investigate the accuracy-diversity tradeoffs using the proposed method across different user segments. As part of this evaluation we propose new metrics, adapted from information retrieval, that help us measure the effectiveness of our approach in personalizing diversity. Our experimental evaluation is based on two different datasets: MovieLens movie ratings, and Yelp restaurant reviews.", "Recommender systems use data on past user preferences to predict possible future likes and interests. A key challenge is that while the most useful individual recommendations are to be found among diverse niche objects, the most reliably accurate results are obtained by methods that recommend objects based on user or object similarity. In this paper we introduce a new algorithm specifically to address the challenge of diversity and show how it can be used to resolve this apparent dilemma when combined in an elegant hybrid with an accuracy-focused algorithm. By tuning the hybrid appropriately we are able to obtain, without relying on any semantic or context-specific information, simultaneous gains in both accuracy and diversity of recommendations.", "" ] }
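The xQuAD objective mentioned above greedily balances relevance against aspect coverage. A compact, hedged sketch with two aspects (head and tail items); the probability estimates here are simplified placeholders:

    import numpy as np

    def xquad(rel, p_item_aspect, p_aspect_user, k, lam=0.5):
        # rel: (n,) relevance; p_item_aspect: (n, n_aspects) item-aspect
        # likelihoods; p_aspect_user: (n_aspects,) user's interest per aspect
        n, n_aspects = p_item_aspect.shape
        selected, remaining = [], set(range(n))
        not_covered = np.ones(n_aspects)  # prod over selected of (1 - P(i|a))
        for _ in range(k):
            best, best_score = None, -np.inf
            for i in remaining:
                div = np.sum(p_aspect_user * p_item_aspect[i] * not_covered)
                score = (1 - lam) * rel[i] + lam * div
                if score > best_score:
                    best, best_score = i, score
            selected.append(best)
            remaining.remove(best)
            not_covered *= 1 - p_item_aspect[best]
        return selected

    rel = np.array([0.9, 0.8, 0.5, 0.4])
    p_ia = np.array([[1., 0.], [1., 0.], [0., 1.], [0., 1.]])  # head / tail
    print(xquad(rel, p_ia, p_aspect_user=np.array([0.6, 0.4]), k=3))

With lam > 0, the second pick already jumps to a tail item even though its base relevance is lower, illustrating the accuracy versus coverage trade-off.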
1906.11711
2953608346
Many recommendation algorithms suffer from popularity bias: a small number of popular items being recommended too frequently, while other items get insufficient exposure. Research in this area so far has concentrated on a one-shot representation of this bias, and on algorithms to improve the diversity of individual recommendation lists. In this work, we take a time-sensitive view of popularity bias, in which the algorithm assesses its long-tail coverage at regular intervals, and compensates in the present moment for omissions in the past. In particular, we present a temporal version of the well-known xQuAD diversification algorithm adapted for long-tail recommendation. Experimental results on two public datasets show that our method is more effective in terms of the long-tail coverage and accuracy tradeoff compared to some other existing approaches.
Temporal diversity and novelty have also been explored in @cite_19 , where the authors investigated how different algorithms perform in terms of the diversity of the recommended item lists over time. Our work is one approach to improving the temporal novelty of recommendations, although our focus is on coverage of the item catalog rather than on differences across lists (a bookkeeping sketch is given below).
{ "cite_N": [ "@cite_19" ], "mid": [ "2159155347" ], "abstract": [ "Collaborative Filtering (CF) algorithms, used to build web-based recommender systems, are often evaluated in terms of how accurately they predict user ratings. However, current evaluation techniques disregard the fact that users continue to rate items over time: the temporal characteristics of the system's top-N recommendations are not investigated. In particular, there is no means of measuring the extent that the same items are being recommended to users over and over again. In this work, we show that temporal diversity is an important facet of recommender systems, by showing how CF data changes over time and performing a user survey. We then evaluate three CF algorithms from the point of view of the diversity in the sequence of recommendation lists they produce over time. We examine how a number of characteristics of user rating patterns (including profile size and time between rating) affect diversity. We then propose and evaluate set methods that maximise temporal recommendation diversity without extensively penalising accuracy." ] }
1906.11761
2953786379
Identifying academic plagiarism is a pressing task for educational and research institutions, publishers, and funding agencies. Current plagiarism detection systems reliably find instances of copied and moderately reworded text. However, reliably detecting concealed plagiarism, such as strong paraphrases, translations, and the reuse of nontextual content and ideas is an open research problem. In this paper, we extend our prior research on analyzing mathematical content and academic citations. Both are promising approaches for improving the detection of concealed academic plagiarism primarily in Science, Technology, Engineering and Mathematics (STEM). We make the following contributions: i) We present a two-stage detection process that combines similarity assessments of mathematical content, academic citations, and text. ii) We introduce new similarity measures that consider the order of mathematical features and outperform the measures in our prior research. iii) We compare the effectiveness of the math-based, citation-based, and text-based detection approaches using confirmed cases of academic plagiarism. iv) We demonstrate that the combined analysis of math-based and citation-based content features allows identifying potentially suspicious cases in a collection of 102K STEM documents. Overall, we show that analyzing the similarity of mathematical content and academic citations is a striking supplement for conventional text-based detection approaches for academic literature in the STEM disciplines. The data and code of our study are openly available at https: purl.org hybridPD
Text retrieval research has yielded mature systems that reliably detect copied or moderately altered text in an input document and retrieve its source, provided the source is part of the system's reference collection. For the candidate retrieval stage, such systems typically employ character-gram or word-gram fingerprinting @cite_23 @cite_6 or term-based vector space models @cite_15 (a minimal fingerprinting sketch is given below). For the detailed analysis stage, they often perform exhaustive string comparisons @cite_6 or computationally more efficient text alignment @cite_23 . Text alignment approaches typically use matching strings as seeds, which they extend and then filter using heuristics @cite_29 .
{ "cite_N": [ "@cite_15", "@cite_29", "@cite_6", "@cite_23" ], "mid": [ "1679197109", "1790180460", "2121659786", "" ], "abstract": [ "Plagiarism is an illicit act which has become a prime concern mainly in educational and research domains. This deceitful act is usually referred as an intellectual theft which has swiftly increased with the rapid technological developments and information accessibility. Thus the need for a system mechanism for efficient plagiarism detection is at its urgency. In this paper, an investigation of different combined similarity metrics for extrinsic plagiarism detection is done and it focuses on unfolding the importance of combined similarity metrics over the commonly used single metric usage in plagiarism detection task. Further the impact of utilizing part of speech tagging (POS) in the plagiarism detection model is analyzed. Different combinations of the four single metrics, Cosine similarity, Dice coefficient, Match coefficient and Fuzzy-Semantic measure is used with and without POS tag information. These systems are evaluated using PAN1 -2014 training and test data set and results are analyzed and compared using standard PAN measures, viz, recall, precision, granularity and plagdet_score.", "The task of monolingual text alignment consists in finding similar text fragments between two given documents. It has applications in plagiarism detection, detection of text reuse, author identification, authoring aid, and information retrieval, to mention only a few. We describe our approach to the text alignment subtask of the plagiarism detection competition at PAN 2014, which resulted in the best-performing system at the PAN 2014 competition and outperforms the best-performing system of the PAN 2013 competition by the cumulative evaluation measure Plagdet. Our method relies on a sentence similarity measure based on a tf-idf-like weighting scheme that permits us to consider stopwords without increasing the rate of false positives. We introduce a recursive algorithm to extend the ranges of matching sentences to maximal length passages. We also introduce a novel filtering method to resolve overlapping plagiarism cases. Our system is available as open source.", "Extracting knowledge from document and Web pages for plagiarism detection.An information fusion based system for plagiarism detection in the educational institutions.Text mining algorithms for detecting plagiarism patterns in digital documents. Plagiarism refers to the act of presenting external words, thoughts, or ideas as one's own, without providing references to the sources from which they were taken. The exponential growth of different digital document sources available on the Web has facilitated the spread of this practice, making the accurate detection of it a crucial task for educational institutions. In this article, we present DOCODE 3.0, a Web system for educational institutions that performs automatic analysis of large quantities of digital documents in relation to their degree of originality. Since plagiarism is a complex problem, frequently tackled at different levels, our system applies algorithms in order to perform an information fusion process from multi data source to all these levels. These algorithms have been successfully tested in the scientific community in solving tasks like the identification of plagiarized passages and the retrieval of source candidates from the Web, among other multi data sources as digital libraries, and have proven to be very effective. 
We integrate these algorithms into a multi-tier, robust and scalable JEE architecture, allowing many different types of clients with different requirements to consume our services. For users, DOCODE produces a number of visualizations and reports from the different outputs to let teachers and professors gain insights on the originality of the documents they review, allowing them to discover, understand and handle possible plagiarism cases and making it easier and much faster to analyze a vast number of documents. Our experience here is so far focused on the Chilean situation and the Spanish language, offering solutions to Chilean educational institutions in any of their preferred Virtual Learning Environments. However, DOCODE can easily be adapted to increase language coverage.", "" ] }
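For the candidate retrieval stage described above, word n-gram fingerprinting can be sketched in a few lines; the hash function, n-gram size, and scoring are illustrative choices, not those of any specific system:

    import hashlib

    def fingerprints(text, n=5):
        words = text.lower().split()
        grams = (" ".join(words[i:i + n]) for i in range(len(words) - n + 1))
        return {hashlib.md5(g.encode()).hexdigest() for g in grams}

    def candidate_scores(suspicious, collection):
        # rank source documents by the number of shared fingerprints
        susp = fingerprints(suspicious)
        return sorted(((len(susp & fingerprints(doc)), doc_id)
                       for doc_id, doc in collection.items()), reverse=True)

Production systems typically select only a subset of fingerprints (e.g., via winnowing) and query an inverted index instead of comparing pairwise, but the overlap-counting principle is the same.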
1906.11761
2953786379
Identifying academic plagiarism is a pressing task for educational and research institutions, publishers, and funding agencies. Current plagiarism detection systems reliably find instances of copied and moderately reworded text. However, reliably detecting concealed plagiarism, such as strong paraphrases, translations, and the reuse of nontextual content and ideas is an open research problem. In this paper, we extend our prior research on analyzing mathematical content and academic citations. Both are promising approaches for improving the detection of concealed academic plagiarism primarily in Science, Technology, Engineering and Mathematics (STEM). We make the following contributions: i) We present a two-stage detection process that combines similarity assessments of mathematical content, academic citations, and text. ii) We introduce new similarity measures that consider the order of mathematical features and outperform the measures in our prior research. iii) We compare the effectiveness of the math-based, citation-based, and text-based detection approaches using confirmed cases of academic plagiarism. iv) We demonstrate that the combined analysis of math-based and citation-based content features allows identifying potentially suspicious cases in a collection of 102K STEM documents. Overall, we show that analyzing the similarity of mathematical content and academic citations is a striking supplement for conventional text-based detection approaches for academic literature in the STEM disciplines. The data and code of our study are openly available at https: purl.org hybridPD
To detect cross-lingual (CL) plagiarism, researchers have proposed approaches that leverage the lexical similarity of languages, e.g., CL character @math -gram matching (a toy version is given below), as well as approaches that employ thesauri, parallel corpora, or machine translation followed by a monolingual analysis @cite_32 .
{ "cite_N": [ "@cite_32" ], "mid": [ "115160895" ], "abstract": [ "In a previous paper, we showed that analyzing citation patterns in the well-known plagiarized thesis by K. T. zu Guttenberg clearly outperformed current detection methods in identifying cross-language plagiarism. However, the experiment was a proof of concept and we did not provide a prototype. This paper presents a fully functional, web-based visualization of citation patterns for this verified cross-language plagiarism case, allowing the user to interactively experience the benefits of citation pattern analysis for plagiarism detection. Using examples from the Guttenberg plagiarism case, we demonstrate that the citation pattern visualization reduces the required examiner effort to verify the extent of plagiarism." ] }
1906.11761
2953786379
Identifying academic plagiarism is a pressing task for educational and research institutions, publishers, and funding agencies. Current plagiarism detection systems reliably find instances of copied and moderately reworded text. However, reliably detecting concealed plagiarism, such as strong paraphrases, translations, and the reuse of nontextual content and ideas is an open research problem. In this paper, we extend our prior research on analyzing mathematical content and academic citations. Both are promising approaches for improving the detection of concealed academic plagiarism primarily in Science, Technology, Engineering and Mathematics (STEM). We make the following contributions: i) We present a two-stage detection process that combines similarity assessments of mathematical content, academic citations, and text. ii) We introduce new similarity measures that consider the order of mathematical features and outperform the measures in our prior research. iii) We compare the effectiveness of the math-based, citation-based, and text-based detection approaches using confirmed cases of academic plagiarism. iv) We demonstrate that the combined analysis of math-based and citation-based content features allows identifying potentially suspicious cases in a collection of 102K STEM documents. Overall, we show that analyzing the similarity of mathematical content and academic citations is a striking supplement for conventional text-based detection approaches for academic literature in the STEM disciplines. The data and code of our study are openly available at https: purl.org hybridPD
For cross-lingual PD, the candidate retrieval stage likewise appears to impose an upper bound on the otherwise higher effectiveness of the methods in the detailed analysis stage. Ehsan2016 reported an approach that achieved an @math -score of @math ( @math , @math ) for the detailed analysis of cross-lingual plagiarism in the Webis-TRC-2012 @cite_7 . For candidate retrieval, Ehsan2016a reported a maximum recall of @math on the same corpus @cite_16 . These results suggest that the candidate retrieval step, which must succeed before semantic, syntactic, and cross-lingual PD approaches can be applied, currently limits the overall effectiveness of such approaches.
{ "cite_N": [ "@cite_16", "@cite_7" ], "mid": [ "2343777367", "2515051907" ], "abstract": [ "Proposing a candidate retrieval model for cross-lingual plagiarism detectionThe method relies on using two levels of proximity informationProposing a topic-based text segmentation methodComparing the method with other cross-lingual plagiarism detection approachesShowing improvements using text segmentation and positional language models The rapid growth of documents in different languages, the increased accessibility of electronic documents, and the availability of translation tools have caused cross-lingual plagiarism detection research area to receive increasing attention in recent years. The task of cross-language plagiarism detection entails two main steps: candidate retrieval and assessing pairwise document similarity. In this paper we examine candidate retrieval, where the goal is to find potential source documents of a suspicious text. Our proposed method for cross-language plagiarism detection is a keyword-focused approach. Since plagiarism usually happens in parts of the text, there is a requirement to segment the texts into fragments to detect local similarity. Therefore we propose a topic-based segmentation algorithm to convert the suspicious document to a set of related passages. After that, we use a proximity-based model to retrieve documents with the best matching passages. Experiments show promising results for this important phase of cross-language plagiarism detection.", "The Web offers fast and easy access to a wide range of documents in various languages, and translation and editing tools provide the means to create derivative documents fairly easily. This leads to the need to develop effective tools for detecting cross-language plagiarism. Given a suspicious document, cross-language plagiarism detection comprises two main subtasks: retrieving documents that are candidate sources for that document and analyzing those candidates one by one to determine their similarity to the suspicious document. In this paper we focus on the second subtask and introduce a novel approach for assessing cross-language similarity between texts for detecting plagiarized cases. Our proposed approach has two main steps: a vector-based retrieval framework that focuses on high recall, followed by a more precise similarity analysis based on dynamic text alignment. Experiments show that our method outperforms the methods of the best results in PAN-2012 and PAN-2014 in terms of plagdet score. We also show that aligning n-gram units, instead of aligning complete sentences, improves the accuracy of detecting plagiarism." ] }
1906.11761
2953786379
Identifying academic plagiarism is a pressing task for educational and research institutions, publishers, and funding agencies. Current plagiarism detection systems reliably find instances of copied and moderately reworded text. However, reliably detecting concealed plagiarism, such as strong paraphrases, translations, and the reuse of nontextual content and ideas is an open research problem. In this paper, we extend our prior research on analyzing mathematical content and academic citations. Both are promising approaches for improving the detection of concealed academic plagiarism primarily in Science, Technology, Engineering and Mathematics (STEM). We make the following contributions: i) We present a two-stage detection process that combines similarity assessments of mathematical content, academic citations, and text. ii) We introduce new similarity measures that consider the order of mathematical features and outperform the measures in our prior research. iii) We compare the effectiveness of the math-based, citation-based, and text-based detection approaches using confirmed cases of academic plagiarism. iv) We demonstrate that the combined analysis of math-based and citation-based content features allows identifying potentially suspicious cases in a collection of 102K STEM documents. Overall, we show that analyzing the similarity of mathematical content and academic citations is a striking supplement for conventional text-based detection approaches for academic literature in the STEM disciplines. The data and code of our study are openly available at https: purl.org hybridPD
We also showed that analyzing image similarity in academic documents, e.g., the similarity of figures and plots, improves the detection of concealed forms of AP @cite_30 (a toy perceptual-hashing example is given below).
{ "cite_N": [ "@cite_30" ], "mid": [ "2807704784" ], "abstract": [ "Identifying plagiarized content is a crucial task for educational and research institutions, funding agencies, and academic publishers. Plagiarism detection systems available for productive use reliably identify copied text, or near-copies of text, but often fail to detect disguised forms of academic plagiarism, such as paraphrases, translations, and idea plagiarism. To improve the detection capabilities for disguised forms of academic plagiarism, we analyze the images in academic documents as text-independent features. We propose an adaptive, scalable, and extensible image-based plagiarism detection approach suitable for analyzing a wide range of image similarities that we observed in academic documents. The proposed detection approach integrates established image analysis methods, such as perceptual hashing, with newly developed similarity assessments for images, such as ratio hashing and position-aware OCR text matching. We evaluate our approach using 15 image pairs that are representative of the spectrum of image similarity we observed in alleged and confirmed cases of academic plagiarism. We embed the test cases in a collection of 4,500 related images from academic texts. Our detection approach achieved a recall of 0.73 and a precision of 1. These results indicate that our image-based approach can complement other content-based feature analysis approaches to retrieve potential source documents for suspiciously similar content from large collections. We provide our code as open source to facilitate future research on image-based plagiarism detection." ] }
1906.11761
2953786379
Identifying academic plagiarism is a pressing task for educational and research institutions, publishers, and funding agencies. Current plagiarism detection systems reliably find instances of copied and moderately reworded text. However, reliably detecting concealed plagiarism, such as strong paraphrases, translations, and the reuse of nontextual content and ideas is an open research problem. In this paper, we extend our prior research on analyzing mathematical content and academic citations. Both are promising approaches for improving the detection of concealed academic plagiarism primarily in Science, Technology, Engineering and Mathematics (STEM). We make the following contributions: i) We present a two-stage detection process that combines similarity assessments of mathematical content, academic citations, and text. ii) We introduce new similarity measures that consider the order of mathematical features and outperform the measures in our prior research. iii) We compare the effectiveness of the math-based, citation-based, and text-based detection approaches using confirmed cases of academic plagiarism. iv) We demonstrate that the combined analysis of math-based and citation-based content features allows identifying potentially suspicious cases in a collection of 102K STEM documents. Overall, we show that analyzing the similarity of mathematical content and academic citations is a striking supplement for conventional text-based detection approaches for academic literature in the STEM disciplines. The data and code of our study are openly available at https: purl.org hybridPD
In a recent short paper @cite_13 , we extended the idea of citation-based PD. We proposed that mathematical expressions share many characteristics of academic citations and hence are promising nontextual content features for detecting concealed forms of AP. Like academic citations, mathematical expressions are essential components of academic documents in the Science, Technology, Engineering and Mathematics (STEM) fields. Furthermore, mathematical expressions are independent of the natural language text and carry rich semantic information. Additionally, some STEM disciplines, such as mathematics and physics, are known for their comparatively sparse use of academic citations @cite_8 . A citation-based analysis alone is therefore less likely to reveal potentially suspicious content similarity in these disciplines (a toy sketch of comparing math features is given below).
{ "cite_N": [ "@cite_13", "@cite_8" ], "mid": [ "2767810198", "1978868011" ], "abstract": [ "This paper presents, to our knowledge, the first study on analyzing mathematical expressions to detect academic plagiarism. We make the following contributions. First, we investigate confirmed cases of plagiarism to categorize the similarities of mathematical content commonly found in plagiarized publications. From this investigation, we derive possible feature selection and feature comparison strategies for developing math-based detection approaches and a ground truth for our experiments. Second, we create a test collection by embedding confirmed cases of plagiarism into the NTCIR-11 MathIR Task dataset, which contains approx. 60 million mathematical expressions in 105,120 documents from arXiv.org. Third, we develop a first math-based detection approach by implementing and evaluating different feature comparison approaches using an open source parallel data processing pipeline built using the Apache Flink framework. The best performing approach identifies all but two of our real-world test cases at the top rank and achieves a mean reciprocal rank of 0.86. The results show that mathematical expressions are promising text-independent features to identify academic plagiarism in large collections. To facilitate future research on math-based plagiarism detection, we make our source code and data available.", "An analysis of three major problems in the application of bibliometric research performance indicators is made in three separate sections. In the first section, the influence of field-dependent citation practices is analysed. The results indicate that rankings of publications from different fields, based on citation counts, can be affected seriously by differences between citation characteristics in those fields. If certain assumptions hold, one should expect high (short term) citation levels in Biochemistry, Celbiology and Biophysics. Medium citation levels are to be expected in Experimental and Molecular Physics, Physical and Organic Chemistry, Pharmacology and Plant Physiology, and low citation levels in Mathematics, Taxonomy, Pharmacognosy and Inorganic Solid State Chemistry. In the second section time-dependent factors are studied. It is shown that trend-analyses of output and impact based on bibliometric scores can be disturbed by changes in theSCI-database and in publication and citation practices. One of the disturbing factors is shown to be the inclusion of so called Books into theSCI data-base in 1977. Finally, in the third section a case is presented which illustrates the consequences of operating on incomplete bibliometric data in the evaluation of scientific performance. A completeness percentage of 99 for publication data is proposed as a standard in evaluations of the performance of small university research groups)." ] }
1906.11661
2953501443
The task of image generation has started to receive some attention from artists and designers to inspire them in new creations. However, exploiting the results of deep generative models such as Generative Adversarial Networks can be long and tedious given the lack of existing tools. In this work, we propose a simple strategy to inspire creators with new generations learned from a dataset of their choice, while providing some control over them. We design a simple optimization method to find the optimal latent parameters corresponding to the closest generation to any input inspirational image. Specifically, we allow the generation given an inspirational image of the user's choice by performing several optimization steps to recover optimal parameters from the model's latent space. We tested several exploration methods, ranging from classic gradient descents to gradient-free optimizers. Many gradient-free optimizers just need comparisons (better/worse than another image), so they can even be used without a numerical criterion and without an inspirational image, with only human preference. Thus, by iterating on one's preferences we could make robust Facial Composite or Fashion Generation algorithms. High resolution of the produced design generations is obtained using progressive growing of GANs. Our results on four datasets of faces, fashion images, and textures show that satisfactory images are effectively retrieved in most cases.
There has been substantial progress lately on image quality and on the stabilization of adversarial training. The Wasserstein training objective was introduced together with weight clipping to enforce a Lipschitz constraint on the critic @cite_34 . The clipping was later replaced by a gradient penalty loss @cite_42 (sketched below). Progressive growing of GAN architectures @cite_27 employs this loss, leading to stable models that reach high resolution in a reasonable training time. Other approaches such as @cite_23 also reach high resolution, but, in our experience, with slower convergence. Class-conditional approaches @cite_19 based on attention mechanisms such as SAGAN @cite_20 and on spectral normalization @cite_33 , or BigGAN @cite_13 , display remarkable results but have not yet been assessed for high-resolution image generation.
{ "cite_N": [ "@cite_33", "@cite_42", "@cite_19", "@cite_27", "@cite_23", "@cite_34", "@cite_13", "@cite_20" ], "mid": [ "2963836885", "2605135824", "", "2766527293", "2792263949", "", "2952716587", "2099471712" ], "abstract": [ "One of the challenges in the study of generative adversarial networks is the instability of its training. In this paper, we propose a novel weight normalization technique called spectral normalization to stabilize the training of the discriminator. Our new normalization technique is computationally light and easy to incorporate into existing implementations. We tested the efficacy of spectral normalization on CIFAR10, STL-10, and ILSVRC2012 dataset, and we experimentally confirmed that spectrally normalized GANs (SN-GANs) is capable of generating images of better or equal quality relative to the previous training stabilization techniques.", "Generative Adversarial Networks (GANs) are powerful generative models, but suffer from training instability. The recently proposed Wasserstein GAN (WGAN) makes progress toward stable training of GANs, but sometimes can still generate only low-quality samples or fail to converge. We find that these problems are often due to the use of weight clipping in WGAN to enforce a Lipschitz constraint on the critic, which can lead to undesired behavior. We propose an alternative to clipping weights: penalize the norm of gradient of the critic with respect to its input. Our proposed method performs better than standard WGAN and enables stable training of a wide variety of GAN architectures with almost no hyperparameter tuning, including 101-layer ResNets and language models over discrete data. We also achieve high quality generations on CIFAR-10 and LSUN bedrooms.", "", "We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CelebA images at 1024^2. We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10. Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution, we construct a higher-quality version of the CelebA dataset.", "", "", "Despite recent progress in generative image modeling, successfully generating high-resolution, diverse samples from complex datasets such as ImageNet remains an elusive goal. To this end, we train Generative Adversarial Networks at the largest scale yet attempted, and study the instabilities specific to such scale. We find that applying orthogonal regularization to the generator renders it amenable to a simple \"truncation trick,\" allowing fine control over the trade-off between sample fidelity and variety by reducing the variance of the Generator's input. Our modifications lead to models which set the new state of the art in class-conditional image synthesis. 
When trained on ImageNet at 128x128 resolution, our models (BigGANs) achieve an Inception Score (IS) of 166.5 and Frechet Inception Distance (FID) of 7.4, improving over the previous best IS of 52.52 and FID of 18.6.", "We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples." ] }
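The gradient penalty mentioned above replaces weight clipping by penalizing the critic's gradient norm on interpolates between real and fake samples. A standard PyTorch rendition; the critic D and the image batches are assumed given, and this is a sketch rather than a full training loop:

    import torch

    def gradient_penalty(D, real, fake, gp_weight=10.0):
        # random convex combinations of real and fake images
        eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
        interp = (eps * real + (1 - eps) * fake).requires_grad_(True)
        d_out = D(interp)
        grads = torch.autograd.grad(outputs=d_out, inputs=interp,
                                    grad_outputs=torch.ones_like(d_out),
                                    create_graph=True)[0]
        grad_norm = grads.flatten(1).norm(2, dim=1)
        return gp_weight * ((grad_norm - 1) ** 2).mean()

The term is added to the critic loss, pushing the gradient norm toward 1 and thereby softly enforcing the Lipschitz constraint.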
1906.11661
2953501443
The task of image generation has started to receive some attention from artists and designers to inspire them in new creations. However, exploiting the results of deep generative models such as Generative Adversarial Networks can be long and tedious given the lack of existing tools. In this work, we propose a simple strategy to inspire creators with new generations learned from a dataset of their choice, while providing some control over them. We design a simple optimization method to find the optimal latent parameters corresponding to the closest generation to any input inspirational image. Specifically, we allow the generation given an inspirational image of the user's choice by performing several optimization steps to recover optimal parameters from the model's latent space. We tested several exploration methods, ranging from classic gradient descents to gradient-free optimizers. Many gradient-free optimizers just need comparisons (better/worse than another image), so they can even be used without a numerical criterion and without an inspirational image, with only human preference. Thus, by iterating on one's preferences we could make robust Facial Composite or Fashion Generation algorithms. High resolution of the produced design generations is obtained using progressive growing of GANs. Our results on four datasets of faces, fashion images, and textures show that satisfactory images are effectively retrieved in most cases.
The exploration of the latent space of GANs was popularized by the DCGAN work, which presented latent-space interpolations and arithmetic operations on latent codes @cite_30 (a minimal interpolation sketch is given below). Learning a mapping that projects data back into the latent space of GANs has been studied in the context of bi-directional GANs @cite_9 , with an emphasis on its utility in semi-supervised learning. Similarly, image generation may improve zero-shot learning tasks @cite_11 . In Fader Networks @cite_40 , image manipulation is made possible by learning an image representation disentangled from its attributes through adversarial training. Recently, the style-based generator architecture of @cite_17 improved the coherency of the generator's internal representation by creating an intermediate latent space and enforcing feature-statistics proximity between neighboring codes. The criterion of feature similarity is borrowed from Texture Networks @cite_38 .
{ "cite_N": [ "@cite_30", "@cite_38", "@cite_11", "@cite_9", "@cite_40", "@cite_17" ], "mid": [ "2173520492", "2952226636", "2771620762", "2412320034", "2962752582", "2904367110" ], "abstract": [ "In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Comparatively, unsupervised learning with CNNs has received less attention. In this work we hope to help bridge the gap between the success of CNNs for supervised learning and unsupervised learning. We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning. Training on various image datasets, we show convincing evidence that our deep convolutional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and discriminator. Additionally, we use the learned features for novel tasks - demonstrating their applicability as general image representations.", "recently demonstrated that deep networks can generate beautiful textures and stylized images from a single texture example. However, their methods requires a slow and memory-consuming optimization process. We propose here an alternative approach that moves the computational burden to a learning stage. Given a single example of a texture, our approach trains compact feed-forward convolutional networks to generate multiple samples of the same texture of arbitrary size and to transfer artistic style from a given image to any other image. The resulting networks are remarkably light-weight and can generate textures of quality comparable to Gatys et al., but hundreds of times faster. More generally, our approach highlights the power and flexibility of generative feed-forward models trained with complex and expressive loss functions.", "Most existing zero-shot learning methods consider the problem as a visual semantic embedding one. Given the demonstrated capability of Generative Adversarial Networks(GANs) to generate images, we instead leverage GANs to imagine unseen categories from text descriptions and hence recognize novel classes with no examples being seen. Specifically, we propose a simple yet effective generative model that takes as input noisy text descriptions about an unseen class (e.g.Wikipedia articles) and generates synthesized visual features for this class. With added pseudo data, zero-shot learning is naturally converted to a traditional classification problem. Additionally, to preserve the inter-class discrimination of the generated features, a visual pivot regularization is proposed as an explicit supervision. Unlike previous methods using complex engineered regularizers, our approach can suppress the noise well without additional regularization. Empirically, we show that our method consistently outperforms the state of the art on the largest available benchmarks on Text-based Zero-shot Learning.", "The ability of the Generative Adversarial Networks (GANs) framework to learn generative models mapping from simple latent distributions to arbitrarily complex data distributions has been demonstrated empirically, with compelling results showing that the latent space of such generators captures semantic variation in the data distribution. 
Intuitively, models trained to predict these semantic latent representations given data may serve as useful feature representations for auxiliary problems where semantics are relevant. However, in their existing form, GANs have no means of learning the inverse mapping -- projecting data back into the latent space. We propose Bidirectional Generative Adversarial Networks (BiGANs) as a means of learning this inverse mapping, and demonstrate that the resulting learned feature representation is useful for auxiliary supervised discrimination tasks, competitive with contemporary approaches to unsupervised and self-supervised feature learning.", "This paper introduces a new encoder-decoder architecture that is trained to reconstruct images by disentangling the salient information of the image and the values of attributes directly in the latent space. As a result, after training, our model can generate different realistic versions of an input image by varying the attribute values. By using continuous attribute values, we can choose how much a specific attribute is perceivable in the generated image. This property could allow for applications where users can modify an image using sliding knobs, like faders on a mixing console, to change the facial expression of a portrait, or to update the color of some objects. Compared to the state-of-the-art which mostly relies on training adversarial networks in pixel space by altering attribute values at train time, our approach results in much simpler training schemes and nicely scales to multiple attributes. We present evidence that our model can significantly change the perceived value of the attributes while preserving the naturalness of images.", "We propose an alternative generator architecture for generative adversarial networks, borrowing from style transfer literature. The new architecture leads to an automatically learned, unsupervised separation of high-level attributes (e.g., pose and identity when trained on human faces) and stochastic variation in the generated images (e.g., freckles, hair), and it enables intuitive, scale-specific control of the synthesis. The new generator improves the state-of-the-art in terms of traditional distribution quality metrics, leads to demonstrably better interpolation properties, and also better disentangles the latent factors of variation. To quantify interpolation quality and disentanglement, we propose two new, automated methods that are applicable to any generator architecture. Finally, we introduce a new, highly varied and high-quality dataset of human faces." ] }
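Latent-space exploration of the kind popularized by DCGAN can be reproduced in a few lines once a pretrained generator G (mapping z to an image) is available; the latent dimension of 512 is an assumption:

    import torch

    def lerp(z0, z1, steps=8):
        ts = torch.linspace(0, 1, steps).view(-1, 1)
        return (1 - ts) * z0 + ts * z1  # (steps, dim) batch of latent codes

    def slerp(z0, z1, steps=8):
        # spherical interpolation, often preferred for Gaussian latents
        z0n, z1n = z0 / z0.norm(), z1 / z1.norm()
        omega = torch.acos((z0n * z1n).sum().clamp(-1, 1))
        ts = torch.linspace(0, 1, steps).view(-1, 1)
        return (torch.sin((1 - ts) * omega) * z0
                + torch.sin(ts * omega) * z1) / torch.sin(omega)

    z0, z1 = torch.randn(512), torch.randn(512)
    batch = slerp(z0, z1)  # then: images = G(batch) with a pretrained G

Latent arithmetic works the same way: combinations such as z_smiling - z_neutral + z_other are simply fed through G.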
1906.11661
2953501443
The task of image generation has started to receive some attention from artists and designers to inspire them in new creations. However, exploiting the results of deep generative models such as Generative Adversarial Networks can be long and tedious given the lack of existing tools. In this work, we propose a simple strategy to inspire creators with new generations learned from a dataset of their choice, while providing some control over them. We design a simple optimization method to find the optimal latent parameters corresponding to the closest generation to any input inspirational image. Specifically, we allow the generation given an inspirational image of the user's choice by performing several optimization steps to recover optimal parameters from the model's latent space. We tested several exploration methods, ranging from classic gradient descents to gradient-free optimizers. Many gradient-free optimizers just need comparisons (better/worse than another image), so they can even be used without a numerical criterion and without an inspirational image, with only human preference. Thus, by iterating on one's preferences we could make robust Facial Composite or Fashion Generation algorithms. High resolution of the produced design generations is obtained using progressive growing of GANs. Our results on four datasets of faces, fashion images, and textures show that satisfactory images are effectively retrieved in most cases.
A number of works focus on neural visualization of trained networks for image classification @cite_8 @cite_31 @cite_24 . A somewhat related task is membership inference, where the goal is to determine whether an image was seen during training @cite_6 @cite_14 . The notion of feature similarity in image generation is widely employed, for instance in style transfer @cite_32 and in generation quality assessment @cite_25 . The work closest in spirit to our image-inspiration strategy is the inference-via-optimization approach defined to evaluate the severity of mode collapse in Unrolled GANs @cite_4 (sketched in miniature below). We can also cite @cite_39 , which matches vectors in the latent space of GANs to specific pictures at training time using a Nesterov gradient.
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_8", "@cite_32", "@cite_6", "@cite_39", "@cite_24", "@cite_31", "@cite_25" ], "mid": [ "2949780682", "2554314924", "2952186574", "2475287302", "2890077576", "2963376432", "2963464195", "2963174698", "2783879794" ], "abstract": [ "How can we explain the predictions of a black-box model? In this paper, we use influence functions -- a classic technique from robust statistics -- to trace a model's prediction through the learning algorithm and back to its training data, thereby identifying training points most responsible for a given prediction. To scale up influence functions to modern machine learning settings, we develop a simple, efficient implementation that requires only oracle access to gradients and Hessian-vector products. We show that even on non-convex and non-differentiable models where the theory breaks down, approximations to influence functions can still provide valuable information. On linear models and convolutional neural networks, we demonstrate that influence functions are useful for multiple purposes: understanding model behavior, debugging models, detecting dataset errors, and even creating visually-indistinguishable training-set attacks.", "We introduce a method to stabilize Generative Adversarial Networks (GANs) by defining the generator objective with respect to an unrolled optimization of the discriminator. This allows training to be adjusted between using the optimal discriminator in the generator's objective, which is ideal but infeasible in practice, and using the current value of the discriminator, which is often unstable and leads to poor solutions. We show how this technique solves the common problem of mode collapse, stabilizes training of GANs with complex recurrent generators, and increases diversity and coverage of the data distribution by the generator.", "Large Convolutional Network models have recently demonstrated impressive classification performance on the ImageNet benchmark. However there is no clear understanding of why they perform so well, or how they might be improved. In this paper we address both issues. We introduce a novel visualization technique that gives insight into the function of intermediate feature layers and the operation of the classifier. We also perform an ablation study to discover the performance contribution from different model layers. This enables us to find model architectures that outperform Krizhevsky al on the ImageNet classification benchmark. We show our ImageNet model generalizes well to other datasets: when the softmax classifier is retrained, it convincingly beats the current state-of-the-art results on Caltech-101 and Caltech-256 datasets.", "Rendering the semantic content of an image in different styles is a difficult image processing task. Arguably, a major limiting factor for previous approaches has been the lack of image representations that explicitly represent semantic information and, thus, allow to separate image content from style. Here we use image representations derived from Convolutional Neural Networks optimised for object recognition, which make high level image information explicit. We introduce A Neural Algorithm of Artistic Style that can separate and recombine the image content and style of natural images. The algorithm allows us to produce new images of high perceptual quality that combine the content of an arbitrary photograph with the appearance of numerous wellknown artworks. 
Our results provide new insights into the deep image representations learned by Convolutional Neural Networks and demonstrate their potential for high level image synthesis and manipulation.", "Convolutional neural networks memorize part of their training data, which is why strategies such as data augmentation and drop-out are employed to mitigate overfitting. This paper considers the related question of \"membership inference\", where the goal is to determine if an image was used during training. We consider it under three complementary angles. We show how to detect which dataset was used to train a model, and in particular whether some validation images were used at train time. We then analyze explicit memorization and extend classical random label experiments to the problem of learning a model that predicts if an image belongs to an arbitrary set. Finally, we propose a new approach to infer membership when a few of the top layers are not available or have been fine-tuned, and show that lower layers still carry information about the training samples. To support our findings, we conduct large-scale experiments on Imagenet and subsets of YFCC-100M with modern architectures such as VGG and Resnet.", "Generative Adversarial Networks (GANs) have achieved remarkable results in the task of generating realistic natural images. In most applications, GAN models share two aspects in common. On the one hand, GANs training involves solving a challenging saddle point optimization problem, interpreted as an adversarial game between a generator and a discriminator functions. On the other hand, the generator and the discriminator are parametrized in terms of deep convolutional neural networks. The goal of this paper is to disentangle the contribution of these two factors to the success of GANs. In particular, we introduce Generative Latent Optimization (GLO), a framework to train deep convolutional generators without using discriminators, thus avoiding the instability of adversarial optimization problems. Throughout a variety of experiments, we show that GLO enjoys many of the desirable properties of GANs: learning from large data, synthesizing visually-appealing samples, interpolating meaningfully between samples, and performing linear arithmetic with noise vectors.", "Deep neural networks (DNNs) have demonstrated state-of-the-art results on many pattern recognition tasks, especially vision classification problems. Understanding the inner workings of such computational brains is both fascinating basic science that is interesting in its own right---similar to why we study the human brain---and will enable researchers to further improve DNNs. One path to understanding how a neural network functions internally is to study what each of its neurons has learned to detect. One such method is called activation maximization, which synthesizes an input (e.g. an image) that highly activates a neuron. Here we dramatically improve the qualitative state of the art of activation maximization by harnessing a powerful, learned prior: a deep generator network. 
The algorithm (1) generates qualitatively state-of-the-art synthetic images that look almost real, (2) reveals the features learned by each neuron in an interpretable way, (3) generalizes well to new datasets and somewhat well to different network architectures without requiring the prior to be relearned, and (4) can be considered as a high-quality generative method (in this case, by generating novel, creative, interesting, recognizable images).", "We propose a class of loss functions, which we call deep perceptual similarity metrics (DeePSiM), allowing to generate sharp high resolution images from compressed abstract representations. Instead of computing distances in the image space, we compute distances between image features extracted by deep neural networks. This metric reflects perceptual similarity of images much better and, thus, leads to better results. We demonstrate two examples of use cases of the proposed loss: (1) networks that invert the AlexNet convolutional network; (2) a modified version of a variational autoencoder that generates realistic high-resolution random images.", "While it is nearly effortless for humans to quickly assess the perceptual similarity between two images, the underlying processes are thought to be quite complex. Despite this, the most widely used perceptual metrics today, such as PSNR and SSIM, are simple, shallow functions, and fail to account for many nuances of human perception. Recently, the deep learning community has found that features of the VGG network trained on the ImageNet classification task has been remarkably useful as a training loss for image synthesis. But how perceptual are these so-called \"perceptual losses\"? What elements are critical for their success? To answer these questions, we introduce a new Full Reference Image Quality Assessment (FR-IQA) dataset of perceptual human judgments, orders of magnitude larger than previous datasets. We systematically evaluate deep features across different architectures and tasks and compare them with classic metrics. We find that deep features outperform all previous metrics by huge margins. More surprisingly, this result is not restricted to ImageNet-trained VGG features, but holds across different deep architectures and levels of supervision (supervised, self-supervised, or even unsupervised). Our results suggest that perceptual similarity is an emergent property shared across deep visual representations." ] }
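The inference-via-optimization idea referenced above, in miniature: gradient-descend on a latent code z so that G(z) matches a target image. G and target are assumed given, and the pixel-wise L2 loss is a stand-in for the feature-based criteria discussed in the passage:

    import torch

    def invert(G, target, dim=512, steps=500, lr=0.05):
        z = torch.randn(1, dim, requires_grad=True)
        opt = torch.optim.Adam([z], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            loss = ((G(z) - target) ** 2).mean()  # distance in image space
            loss.backward()
            opt.step()
        return z.detach()

When gradients of the criterion are unavailable, e.g., with only human better/worse judgments, the inner update can be swapped for a gradient-free optimizer that proposes candidate codes and keeps the preferred ones.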
1906.11578
2953838426
Understanding how the visual cortex of the human brain really works is still an open problem for science today. A better understanding of natural intelligence could also benefit object-recognition algorithms based on convolutional neural networks. In this paper we demonstrate the advantage of using a shallow residual neural network for this task. The benefit of this approach is that the earlier stages of the network can be trained accurately, which allows us to add more layers at the earlier stage. With this additional layer the prediction of the visual brain activity improves from @math (block 1) to @math (last fully connected layer). Training the network for more than 10 epochs can make this improvement even larger.
The challenge is inspired by the Brain-Score initiative @cite_13 , which found a correlation between ImageNet performance and the Brain-Score. Yet, for the CNNs with the highest performance this correlation becomes weaker. The study concluded that DenseNet-169, CORnet-S and ResNet-101 were the most brain-like CNNs. Still, a number of smaller (i.e., shallower) networks performed quite competitively, leaving the road open to better understand the ventral stream with simpler CNNs.
{ "cite_N": [ "@cite_13" ], "mid": [ "2892147425" ], "abstract": [ "The internal representations of early deep artificial neural networks (ANNs) were found to be remarkably similar to the internal neural representations measured experimentally in the primate brain. Here we ask, as deep ANNs have continued to evolve, are they becoming more or less brain-like? ANNs that are most functionally similar to the brain will contain mechanisms that are most like those used by the brain. We therefore developed Brain-Score - a composite of multiple neural and behavioral benchmarks that score any ANN on how similar it is to the brain9s mechanisms for core object recognition - and we deployed it to evaluate a wide range of state-of-the-art deep ANNs. Using this scoring system, we here report that: (1) DenseNet-169, CORnet-S and ResNet-101 are the most brain-like ANNs. (2) There remains considerable variability in neural and behavioral responses that is not predicted by any ANN, suggesting that no ANN model has yet captured all the relevant mechanisms. (3) Extending prior work, we found that gains in ANN ImageNet performance led to gains on Brain-Score. However, correlation weakened at >= 70 top-1 ImageNet performance, suggesting that additional guidance from neuroscience is needed to make further advances in capturing brain mechanisms. (4) We uncovered smaller (i.e. less complex) ANNs that are more brain-like than many of the best-performing ImageNet models, which suggests the opportunity to simplify ANNs to better understand the ventral stream. The scoring system used here is far from complete. However, we propose that evaluating and tracking model-benchmark correspondences through a Brain-Score that is regularly updated with new brain data is an exciting opportunity: experimental benchmarks can be used to guide machine network evolution, and machine networks are mechanistic hypotheses of the brain9s network and thus drive next experiments. To facilitate both of these, we release Brain-Score.org: a platform that hosts the neural and behavioral benchmarks, where ANNs for visual processing can be submitted to receive a Brain-Score and their rank relative to other models, and where new experimental data can be naturally incorporated." ] }
1906.11578
2953838426
Understanding how the visual cortex of the human brain really works is still an open problem for science today. A better understanding of natural intelligence could also benefit object-recognition algorithms based on convolutional neural networks. In this paper we demonstrate the advantage of using a shallow residual neural network for this task. The benefit of this approach is that the earlier stages of the network can be trained accurately, which allows us to add more layers at the earlier stage. With this additional layer the prediction of the visual brain activity improves from @math (block 1) to @math (last fully connected layer). Training the network for more than 10 epochs can make this improvement even larger.
Another interesting approach is the family of deep CNNs studied by Kar et al. @cite_1 . They observed that deeper CNNs predicted neural responses better than shallower models when unrolling (recurrence) mechanisms are present. The best predictions were obtained with ResNet-50 and ResNet-101. Yet those predictions were for the MEG-data of the IT region, while in this study we concentrate on the fMRI-data of both regions.
{ "cite_N": [ "@cite_1" ], "mid": [ "2949512190" ], "abstract": [ "Non-recurrent deep convolutional neural networks (CNNs) are currently the best at modeling core object recognition, a behavior that is supported by the densely recurrent primate ventral stream, culminating in the inferior temporal (IT) cortex. If recurrence is critical to this behavior, then primates should outperform feedforward-only deep CNNs for images that require additional recurrent processing beyond the feedforward IT response. Here we first used behavioral methods to discover hundreds of these ‘challenge’ images. Second, using large-scale electrophysiology, we observed that behaviorally sufficient object identity solutions emerged 30 ms later in the IT cortex for challenge images compared with primate performance-matched ‘control’ images. Third, these behaviorally critical late-phase IT response patterns were poorly predicted by feedforward deep CNN activations. Notably, very-deep CNNs and shallower recurrent CNNs better predicted these late IT responses, suggesting that there is a functional equivalence between additional nonlinear transformations and recurrence. Beyond arguing that recurrent circuits are critical for rapid object identification, our results provide strong constraints for future recurrent model development." ] }
1811.10678
2903485762
Spiking neural networks (SNNs) have garnered a great amount of interest for supervised and unsupervised learning applications. This paper deals with the problem of training multilayer feedforward SNNs. The non-linear integrate-and-fire dynamics employed by spiking neurons make it difficult to train SNNs to output a desired spike train in response to a given input. To tackle this, first the problem of training a multilayer SNN is formulated as an optimization problem such that its objective function is based on the deviation in membrane potential rather than the spike arrival instants. Then, an optimization method named Normalized Approximate Descent (NormAD), hand-crafted for such non-convex optimization problems, is employed to derive the iterative synaptic weight update rule. Next, it is reformulated for a more efficient implementation, which can also be interpreted as spatio-temporal error backpropagation. The learning rule is validated by employing it to solve a generic spike-based training problem as well as a spike-based formulation of the XOR problem. Thus, the new algorithm is a key step towards building deep spiking neural networks capable of event-triggered learning.
One of the earliest attempts to demonstrate supervised learning with spiking neurons is the SpikeProp algorithm @cite_34 . However, it is restricted to single-spike learning, thereby limiting its information representation capacity. SpikeProp was then extended in @cite_21 to neurons firing multiple spikes. In these studies, the training problem was formulated as an optimization problem whose objective function is the difference between desired and observed spike arrival instants, and gradient descent was used to adjust the weights. However, since the spike arrival time is a discontinuous function of the synaptic strengths, the optimization problem is non-convex and gradient descent is prone to local minima (see the sketch below).
{ "cite_N": [ "@cite_34", "@cite_21" ], "mid": [ "2569813014", "1970109917" ], "abstract": [ "Abstract For a network of spiking neurons that encodes information in the timing of individual spike times, we derive a supervised learning rule, SpikeProp , akin to traditional error-backpropagation. With this algorithm, we demonstrate how networks of spiking neurons with biologically reasonable action potentials can perform complex non-linear classification in fast temporal coding just as well as rate-coded networks. We perform experiments for the classical XOR problem, when posed in a temporal setting, as well as for a number of other benchmark datasets. Comparing the (implicit) number of spiking neurons required for the encoding of the interpolated XOR problem, the trained networks demonstrate that temporal coding is a viable code for fast neural information processing, and as such requires less neurons than instantaneous rate-coding. Furthermore, we find that reliable temporal computation in the spiking networks was only accomplished when using spike response functions with a time constant longer than the coding interval, as has been predicted by theoretical considerations.", "A supervised learning rule for Spiking Neural Networks (SNNs) is presented that can cope with neurons that spike multiple times. The rule is developed by extending the existing SpikeProp algorithm which could only be used for one spike per neuron. The problem caused by the discontinuity in the spike process is counteracted with a simple but effective rule, which makes the learning process more efficient. Our learning rule is successfully tested on a classification task of Poisson spike trains. We also applied the algorithm on a temporal version of the XOR problem and show that it is possible to learn this classical problem using only one spiking neuron making use of a hair-trigger situation." ] }
1811.10678
2903485762
Spiking neural networks (SNNs) have garnered a great amount of interest for supervised and unsupervised learning applications. This paper deals with the problem of training multilayer feedforward SNNs. The non-linear integrate-and-fire dynamics employed by spiking neurons make it difficult to train SNNs to output a desired spike train in response to a given input. To tackle this, first the problem of training a multilayer SNN is formulated as an optimization problem such that its objective function is based on the deviation in membrane potential rather than the spike arrival instants. Then, an optimization method named Normalized Approximate Descent (NormAD), hand-crafted for such non-convex optimization problems, is employed to derive the iterative synaptic weight update rule. Next, it is reformulated for a more efficient implementation, which can also be interpreted as spatio-temporal error backpropagation. The learning rule is validated by employing it to solve a generic spike-based training problem as well as a spike-based formulation of the XOR problem. Thus, the new algorithm is a key step towards building deep spiking neural networks capable of event-triggered learning.
The biologically observed spike-timing-dependent plasticity (STDP) has been used to derive weight update rules for SNNs in @cite_1 @cite_6 @cite_35 . ReSuMe and DL-ReSuMe took cues from both STDP and the Widrow-Hoff rule to formulate a supervised learning algorithm @cite_1 @cite_0 (a discrete-time reading of the ReSuMe update is sketched below). Though these algorithms are biologically inspired, the training time necessary to converge is a concern, especially for real-world applications in large networks. The ReSuMe algorithm has been extended to multilayer feedforward SNNs using backpropagation in @cite_7 .
{ "cite_N": [ "@cite_35", "@cite_7", "@cite_1", "@cite_6", "@cite_0" ], "mid": [ "1492596588", "2054113233", "2165639766", "", "2113420865" ], "abstract": [ "We propose a novel network model of spiking neurons, without preimposed topology and driven by STDP (Spike-Time-Dependent Plasticity), a temporal Hebbian unsupervised learning mode, based on biological observations of synaptic plasticity. The model is further driven by a supervised learning algorithm, based on a margin criterion, that has effect on the synaptic delays linking the network to the output neurons, with classification as a goal task. The network processing and the resulting performance are completely explainable by the concept of polychronization, recently introduced by Izhikevich Izh06NComp . On the one hand, our model can be viewed as a new machine learning concept for classifying patterns by means of spiking neuron networks. On the other hand, as a model of natural neural networks, it provides a new insight on cell assemblies, a fundamental notion for understanding the cognitive processes underlying memory. Keywords. Spiking neuron networks, Synaptic plasticity, STDP, Delay learning, Classifier, Cell assemblies, Polychronous groups.", "We introduce a supervised learning algorithm for multilayer spiking neural networks. The algorithm overcomes a limitation of existing learning algorithms: it can be applied to neurons firing multiple spikes in artificial neural networks with hidden layers. It can also, in principle, be used with any linearizable neuron model and allows different coding schemes of spike train patterns. The algorithm is applied successfully to classic linearly nonseparable benchmarks such as the XOR problem and the Iris data set, as well as to more complex classification and mapping problems. The algorithm has been successfully tested in the presence of noise, requires smaller networks than reservoir computing, and results in faster convergence than existing algorithms for similar tasks such as SpikeProp.", "Learning from instructions or demonstrations is a fundamental property of our brain necessary to acquire new knowledge and develop novel skills or behavioral patterns. This type of learning is thought to be involved in most of our daily routines. Although the concept of instruction-based learning has been studied for several decades, the exact neural mechanisms implementing this process remain unrevealed. One of the central questions in this regard is, How do neurons learn to reproduce template signals (instructions) encoded in precisely timed sequences of spikes? Here we present a model of supervised learning for biologically plausible neurons that addresses this question. In a set of experiments, we demonstrate that our approach enables us to train spiking neurons to reproduce arbitrary template spike patterns in response to given synaptic stimuli even in the presence of various sources of noise. We show that the learning rule can also be used for decision-making tasks. Neurons can be trained to classify categories of input signals based on only a temporal configuration of spikes. The decision is communicated by emitting precisely timed spike trains associated with given input categories. Trained neurons can perform the classification task correctly even if stimuli and corresponding decision times are temporally separated and the relevant information is consequently highly overlapped by the ongoing neural activity. 
Finally, we demonstrate that neurons can be trained to reproduce sequences of spikes with a controllable time shift with respect to target templates. A reproduced signal can follow or even precede the targets. This surprising result points out that spiking neurons can potentially be applied to forecast the behavior (firing times) of other reference neurons or networks.", "", "Recent research has shown the potential capability of spiking neural networks (SNNs) to model complex information processing in the brain. There is biological evidence to prove the use of the precise timing of spikes for information coding. However, the exact learning mechanism in which the neuron is trained to fire at precise times remains an open problem. The majority of the existing learning methods for SNNs are based on weight adjustment. However, there is also biological evidence that the synaptic delay is not constant. In this paper, a learning method for spiking neurons, called delay learning remote supervised method (DL-ReSuMe), is proposed to merge the delay shift approach and ReSuMe-based weight adjustment to enhance the learning performance. DL-ReSuMe uses more biologically plausible properties, such as delay learning, and needs less weight adjustment than ReSuMe. Simulation results have shown that the proposed DL-ReSuMe approach achieves learning accuracy and learning speed improvements compared with ReSuMe." ] }
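As a concrete reading of the ReSuMe idea above, the following is a minimal sketch, assuming a discrete-time form in which desired and observed output spikes drive opposite-signed weight changes, each composed of a non-Hebbian constant a plus an STDP-like exponential window over earlier input spikes. The constants and names are illustrative assumptions, not the authors' reference implementation.

```python
# Hedged sketch of a ReSuMe-style update: desired and actual output
# spikes drive opposite-signed, STDP-windowed changes on each synapse.
# Constants a, A and tau are illustrative assumptions.
import numpy as np

def resume_delta_w(input_spikes, desired, actual, a=0.01, A=0.1, tau=5.0):
    """Per-synapse weight change for one pattern presentation.

    input_spikes: list of arrays, spike times of each presynaptic neuron
    desired, actual: arrays of desired / observed output spike times
    """
    dw = np.zeros(len(input_spikes))
    for i, pre in enumerate(input_spikes):
        for t_d in desired:          # potentiation toward desired spikes
            earlier = pre[pre <= t_d]
            dw[i] += a + A * np.exp(-(t_d - earlier) / tau).sum()
        for t_o in actual:           # depression of erroneous spikes
            earlier = pre[pre <= t_o]
            dw[i] -= a + A * np.exp(-(t_o - earlier) / tau).sum()
    return dw  # zero when the actual train matches the desired one exactly

pre = [np.array([1.0, 6.0]), np.array([3.0])]
print(resume_delta_w(pre, desired=np.array([7.0]), actual=np.array([9.0])))
```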
1811.10678
2903485762
Spiking neural networks (SNNs) have garnered a great amount of interest for supervised and unsupervised learning applications. This paper deals with the problem of training multilayer feedforward SNNs. The non-linear integrate-and-fire dynamics employed by spiking neurons make it difficult to train SNNs to output a desired spike train in response to a given input. To tackle this, first the problem of training a multilayer SNN is formulated as an optimization problem such that its objective function is based on the deviation in membrane potential rather than the spike arrival instants. Then, an optimization method named Normalized Approximate Descent (NormAD), hand-crafted for such non-convex optimization problems, is employed to derive the iterative synaptic weight update rule. Next, it is reformulated for a more efficient implementation, which can also be interpreted as spatio-temporal error backpropagation. The learning rule is validated by employing it to solve a generic spike-based training problem as well as a spike-based formulation of the XOR problem. Thus, the new algorithm is a key step towards building deep spiking neural networks capable of event-triggered learning.
Another notable spike-domain learning rule is PBSNLR @cite_28 , an offline learning rule for the spiking perceptron neuron (SPN) model that uses the perceptron learning rule. The PSD algorithm @cite_17 uses the Widrow-Hoff rule to empirically determine an equivalent learning rule for spiking neurons. The SPAN rule @cite_25 converts input and output spike signals into analog signals and then applies the Widrow-Hoff rule to derive a learning algorithm (see the sketch below). Further, it is applicable only to the training of single-layer SNNs. The SWAT algorithm @cite_13 uses the STDP and BCM rules to derive a weight adaptation strategy for SNNs. The Normalized Spiking Error Back-Propagation (NSEBP) method proposed in @cite_11 is based on approximations of the simplified Spike Response Model for the neuron. The multi-STIP algorithm proposed in @cite_36 defines an inner product for spike trains to approximate a learning cost function. As opposed to the above approaches, which attempt to develop weight update rules for fixed network topologies, there are also efforts to develop feedforward networks based on evolutionary algorithms, where new neuronal connections are progressively added and their weights and firing thresholds updated for every class label in the database @cite_4 @cite_32 .
{ "cite_N": [ "@cite_4", "@cite_28", "@cite_36", "@cite_17", "@cite_32", "@cite_13", "@cite_25", "@cite_11" ], "mid": [ "1987927386", "1971390819", "2512805308", "2130974072", "2133922304", "2170968634", "2154616847", "2341732087" ], "abstract": [ "This paper provides a comprehensive literature survey on the evolving Spiking Neural Network (eSNN) architecture since its introduction in 2006 as a further extension of the ECoS paradigm introduced by Kasabov in 1998. We summarize the functioning of the method, discuss several of its extensions and present a number of applications in which the eSNN method was employed. We focus especially on some proposed extensions that allow the processing of spatio-temporal data and for feature and parameter optimisation of eSNN models to achieve better accuracy on classification prediction problems and to facilitate new knowledge discovery. Finally, some open problems are discussed and future directions highlighted.", "The purpose of supervised learning with temporal encoding for spiking neurons is to make the neurons emit a specific spike train encoded by the precise firing times of spikes. If only running time is considered, the supervised learning for a spiking neuron is equivalent to distinguishing the times of desired output spikes and the other time during the running process of the neuron through adjusting synaptic weights, which can be regarded as a classification problem. Based on this idea, this letter proposes a new supervised learning method for spiking neurons with temporal encoding; it first transforms the supervised learning into a classification problem and then solves the problem by using the perceptron learning rule. The experiment results show that the proposed method has higher learning accuracy and efficiency over the existing learning methods, so it is more powerful for solving complex and real-time problems.", "Recent advances in neurosciences have revealed that neural information in the brain is encoded through precisely timed spike trains, not only through the neural firing rate. This paper presents a new supervised, multi-spike learning algorithm for multilayer spiking neural networks, which can implement the complex spatio-temporal pattern learning of spike trains. The proposed algorithm firstly defines inner product operators to mathematically describe and manipulate spike trains, and then solves the problems of error function construction and backpropagation among multiple output spikes during learning. The algorithm is successfully applied to different temporal tasks, such as learning sequences of spikes and nonlinear pattern classification problems. The experimental results show that the proposed algorithm has higher learning accuracy and efficiency than the Multi-ReSuMe learning algorithm. It is effective for solving complex spatio-temporal pattern learning problems.", "A new learning rule (Precise-Spike-Driven (PSD) Synaptic Plasticity) is proposed for processing and memorizing spatiotemporal patterns. PSD is a supervised learning rule that is analytically derived from the traditional Widrow-Hoff rule and can be used to train neurons to associate an input spatiotemporal spike pattern with a desired spike train. Synaptic adaptation is driven by the error between the desired and the actual output spikes, with positive errors causing long-term potentiation and negative errors causing long-term depression. The amount of modification is proportional to an eligibility trace that is triggered by afferent spikes. 
The PSD rule is both computationally efficient and biologically plausible. The properties of this learning rule are investigated extensively through experimental simulations, including its learning performance, its generality to different neuron models, its robustness against noisy conditions, its memory capacity, and the effects of its learning parameters. Experimental results show that the PSD rule is capable of spatiotemporal pattern classification, and can even outperform a well studied benchmark algorithm with the proposed relative confidence criterion. The PSD rule is further validated on a practical example of an optical character recognition problem. The results again show that it can achieve a good recognition performance with a proper encoding. Finally, a detailed discussion is provided about the PSD rule and several related algorithms including tempotron, SPAN, Chronotron and ReSuMe.", "This paper demonstrates how knowledge can be extracted from evolving spiking neural networks with rank order population coding. Knowledge discovery is a very important feature of intelligent systems. Yet, a disproportionally small amount of research is centered on the issue of knowledge extraction from spiking neural networks which are considered to be the third generation of artificial neural networks. The lack of knowledge representation compatibility is becoming a major detriment to end users of these networks. We show that a high-level knowledge can be obtained from evolving spiking neural networks. More specifically, we propose a method for fuzzy rule extraction from an evolving spiking network with rank order population coding. The proposed method was used for knowledge discovery on two benchmark taste recognition problems where the knowledge learnt by an evolving spiking neural network was extracted in the form of zero-order Takagi-Sugeno fuzzy IF-THEN rules.", "This paper presents a synaptic weight association training (SWAT) algorithm for spiking neural networks (SNNs). SWAT merges the Bienenstock-Cooper-Munro (BCM) learning rule with spike timing dependent plasticity (STDP). The STDP BCM rule yields a unimodal weight distribution where the height of the plasticity window associated with STDP is modulated causing stability after a period of training. The SNN uses a single training neuron in the training phase where data associated with all classes is passed to this neuron. The rule then maps weights to the classifying output neurons to reflect similarities in the data across the classes. The SNN also includes both excitatory and inhibitory facilitating synapses which create a frequency routing capability allowing the information presented to the network to be routed to different hidden layer neurons. A variable neuron threshold level simulates the refractory period. SWAT is initially benchmarked against the nonlinearly separable Iris and Wisconsin Breast Cancer datasets. Results presented show that the proposed training algorithm exhibits a convergence accuracy of 95.5 and 96.2 for the Iris and Wisconsin training sets, respectively, and 95.3 and 96.7 for the testing sets, noise experiments show that SWAT has a good generalization capability. SWAT is also benchmarked using an isolated digit automatic speech recognition (ASR) system where a subset of the TI46 speech corpus is used. 
Results show that with SWAT as the classifier, the ASR system provides an accuracy of 98.875 for training and 95.25 for testing.", "Spiking Neural Networks (SNN) were shown to be suitable tools for the processing of spatio-temporal information. However, due to their inherent complexity, the formulation of efficient supervised learning algorithms for SNN is difficult and remains an important problem in the research area. This article presents SPAN — a spiking neuron that is able to learn associations of arbitrary spike trains in a supervised fashion allowing the processing of spatio-temporal information encoded in the precise timing of spikes. The idea of the proposed algorithm is to transform spike trains during the learning phase into analog signals so that common mathematical operations can be performed on them. Using this conversion, it is possible to apply the well-known Widrow–Hoff rule directly to the transformed spike trains in order to adjust the synaptic weights and to achieve a desired input output spike behavior of the neuron. In the presented experimental analysis, the proposed learning algorithm is evaluated regarding its learning capabilities, its memory capacity, its robustness to noisy stimuli and its classification performance. Differences and similarities of SPAN regarding two related algorithms, ReSuMe and Chronotron, are discussed.", "The spiking neural networks (SNNs) are the third generation of neural networks and perform remarkably well in cognitive tasks such as pattern recognition. The spike emitting and information processing mechanisms found in biological cognitive systems motivate the application of the hierarchical structure and temporal encoding mechanism in spiking neural networks, which have exhibited strong computational capability. However, the hierarchical structure and temporal encoding approach require neurons to process information serially in space and time respectively, which reduce the training efficiency significantly. For training the hierarchical SNNs, most existing methods are based on the traditional back-propagation algorithm, inheriting its drawbacks of the gradient diffusion and the sensitivity on parameters. To keep the powerful computation capability of the hierarchical structure and temporal encoding mechanism, but to overcome the low efficiency of the existing algorithms, a new training algorithm, the Normalized Spiking Error Back Propagation (NSEBP) is proposed in this paper. In the feedforward calculation, the output spike times are calculated by solving the quadratic function in the spike response model instead of detecting postsynaptic voltage states at all time points in traditional algorithms. Besides, in the feedback weight modification, the computational error is propagated to previous layers by the presynaptic spike jitter instead of the gradient decent rule, which realizes the layer-wised training. Furthermore, our algorithm investigates the mathematical relation between the weight variation and voltage error change, which makes the normalization in the weight modification applicable. Adopting these strategies, our algorithm outperforms the traditional SNN multi-layer algorithms in terms of learning efficiency and parameter sensitivity, that are also demonstrated by the comprehensive experimental results in this paper." ] }
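The SPAN-style conversion described above can be sketched as follows, assuming an alpha kernel, a fixed time grid and an illustrative learning rate: spike trains become analog traces, and the Widrow-Hoff rule is applied directly on the traces. The update vanishes exactly when the observed trace matches the desired one, i.e., when the output spike train is correct.

```python
# Hedged sketch of the SPAN idea: spike trains -> analog signals via an
# alpha kernel, then a Widrow-Hoff update on the analog signals.
# Kernel, time grid and learning rate are illustrative assumptions.
import numpy as np

DT = 0.1
t = np.arange(0.0, 50.0, DT)

def analog(spike_times, tau=5.0):
    """Convert a spike train into an analog trace via an alpha kernel."""
    x = np.zeros_like(t)
    for ts in spike_times:
        s = t - ts
        x += np.where(s > 0, (s / tau) * np.exp(1.0 - s / tau), 0.0)
    return x

inputs = [np.array([2.0, 15.0]), np.array([8.0])]   # per-synapse spike trains
y_des = analog(np.array([10.0, 25.0]))              # desired output train
y_act = analog(np.array([12.0]))                    # observed output train

lr = 0.01
# Widrow-Hoff on traces: dw_i = lr * integral of x_i(t) * (y_des - y_act) dt
dw = np.array([lr * np.sum(analog(s) * (y_des - y_act)) * DT for s in inputs])
print("SPAN weight updates:", dw)
```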
1811.10678
2903485762
Spiking neural networks (SNNs) have garnered a great amount of interest for supervised and unsupervised learning applications. This paper deals with the problem of training multilayer feedforward SNNs. The non-linear integrate-and-fire dynamics employed by spiking neurons make it difficult to train SNNs to output a desired spike train in response to a given input. To tackle this, first the problem of training a multilayer SNN is formulated as an optimization problem such that its objective function is based on the deviation in membrane potential rather than the spike arrival instants. Then, an optimization method named Normalized Approximate Descent (NormAD), hand-crafted for such non-convex optimization problems, is employed to derive the iterative synaptic weight update rule. Next, it is reformulated for a more efficient implementation, which can also be interpreted as spatio-temporal error backpropagation. The learning rule is validated by employing it to solve a generic spike-based training problem as well as a spike-based formulation of the XOR problem. Thus, the new algorithm is a key step towards building deep spiking neural networks capable of event-triggered learning.
Recently, an algorithm to learn precisely timed spikes using a leaky integrate-and-fire neuron was presented in @cite_14 . The algorithm converges only when a synaptic weight configuration solving the given training problem exists, and cannot provide a close approximation if an exact solution does not exist. To overcome this limitation, the same paper also presents an algorithm to learn spike sequences with finite precision. It allows a window of width @math around the desired spike instant within which the output spike may arrive, and performs training only on the first deviation from the desired behavior (see the sketch below). While this mitigates the non-linear accumulation of error due to interaction between output spikes, it also restricts training to just one discrepancy per iteration. Backpropagation for training deep networks of LIF neurons has been presented in @cite_33 , derived assuming an impulse-shaped post-synaptic current kernel and treating the discontinuities at spike events as noise. It presents remarkable results on the MNIST and N-MNIST benchmarks using rate-coded outputs, while in the present work we are interested in training multilayer SNNs with temporally encoded outputs, i.e., representing information in the timing of spikes.
{ "cite_N": [ "@cite_14", "@cite_33" ], "mid": [ "2171236529", "2513853720" ], "abstract": [ "Summary To signal the onset of salient sensory features or execute well-timed motor sequences, neuronal circuits must transform streams of incoming spike trains into precisely timed firing. To address the efficiency and fidelity with which neurons can perform such computations, we developed a theory to characterize the capacity of feedforward networks to generate desired spike sequences. We find the maximum number of desired output spikes a neuron can implement to be 0.1–0.3 per synapse. We further present a biologically plausible learning rule that allows feedforward and recurrent networks to learn multiple mappings between inputs and desired spike sequences. We apply this framework to reconstruct synaptic weights from spiking activity and study the precision with which the temporal structure of ongoing behavior can be inferred from the spiking of premotor neurons. This work provides a powerful approach for characterizing the computational and learning capacities of single neurons and neuronal circuits.", "Deep spiking neural networks (SNNs) hold the potential for improving the latency and energy efficiency of deep neural networks through data-driven event-based computation. However, training such networks is difficult due to the non-differentiable nature of spike events. In this paper, we introduce a novel technique, which treats the membrane potentials of spiking neurons as differentiable signals, where discontinuities at spike times are considered as noise. This enables an error backpropagation mechanism for deep SNNs that follows the same principles as in conventional deep networks, but works directly on spike signals and membrane potentials. Compared with previous methods relying on indirect training and conversion, our technique has the potential to capture the statistics of spikes more precisely. We evaluate the proposed framework on artificially generated events from the original MNIST handwritten digit benchmark, and also on the N-MNIST benchmark recorded with an event-based dynamic vision sensor, in which the proposed method reduces the error rate by a factor of more than three compared to the best previous SNN, and also achieves a higher accuracy than a conventional convolutional neural network (CNN) trained and tested on the same data. We demonstrate in the context of the MNIST task that thanks to their event-driven operation, deep SNNs (both fully connected and convolutional) trained with our method achieve accuracy equivalent with conventional neural networks. In the N-MNIST example, equivalent accuracy is achieved with about five times fewer computational operations." ] }
1811.10681
2903234512
The extraction and matching of interest points is a prerequisite for many geometric computer vision problems. Traditionally, matching has been achieved by assigning descriptors to interest points and matching points that have similar descriptors. In this paper, we propose a method by which interest points are instead already implicitly matched at detection time. With this, descriptors do not need to be calculated, stored, communicated, or matched any more. This is achieved by a convolutional neural network with multiple output channels and can be thought of as a collection of a variety of detectors, each specialized to specific visual features. This paper describes how to design and train such a network in a way that results in successful relative pose estimation performance despite the limitation on interest point count. While the overall matching score is slightly lower than with traditional methods, the approach is descriptor free and thus enables localization systems with a significantly smaller memory footprint and multi-agent localization systems with lower bandwidth requirements. The network also outputs the confidence for a specific interest point resulting in a valid match. We evaluate performance relative to state-of-the-art alternatives.
Once a set of interest points has been extracted in the images, they need to be matched to each other to establish one-to-one correspondences. The simplest approach would be to match points whose surrounding image patches are most similar to each other, but this approach is very fragile to even slight changes in illumination and viewpoint. To remedy this fragility, descriptors have been introduced. Descriptors are functions of patches whose output is typically lower-dimensional, invariant to slight illumination and viewpoint changes, yet still distinctive enough to discriminate between the different points extracted in one image. A popular class of traditional descriptors is based on histograms of gradients (HoG) @cite_50 . Another example is binary descriptors, which are particularly efficient to compute and match @cite_31 @cite_45 (see the sketch below).
{ "cite_N": [ "@cite_31", "@cite_45", "@cite_50" ], "mid": [ "2019085623", "", "2151103935" ], "abstract": [ "Binary descriptors are becoming increasingly popular as a means to compare feature points very fast while requiring comparatively small amounts of memory. The typical approach to creating them is to first compute floating-point ones, using an algorithm such as SIFT, and then to binarize them. In this paper, we show that we can directly compute a binary descriptor, which we call BRIEF, on the basis of simple intensity difference tests. As a result, BRIEF is very fast both to build and to match. We compare it against SURF and SIFT on standard benchmarks and show that it yields comparable recognition accuracy, while running in an almost vanishing fraction of the time required by either.", "", "This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance." ] }
1811.10681
2903234512
The extraction and matching of interest points is a prerequisite for many geometric computer vision problems. Traditionally, matching has been achieved by assigning descriptors to interest points and matching points that have similar descriptors. In this paper, we propose a method by which interest points are instead already implicitly matched at detection time. With this, descriptors do not need to be calculated, stored, communicated, or matched any more. This is achieved by a convolutional neural network with multiple output channels and can be thought of as a collection of a variety of detectors, each specialized to specific visual features. This paper describes how to design and train such a network in a way that results in successful relative pose estimation performance despite the limitation on interest point count. While the overall matching score is slightly lower than with traditional methods, the approach is descriptor free and thus enables localization systems with a significantly smaller memory footprint and multi-agent localization systems with lower bandwidth requirements. The network also outputs the confidence for a specific interest point resulting in a valid match. We evaluate performance relative to state-of-the-art alternatives.
For interest point detection, the cornerness response traditionally calculated over the full image can be computed by a fully convolutional neural network (see the toy sketch below). Rather than merely imitating traditional interest point detectors, CNN-based detectors can be trained to be invariant across different viewpoints @cite_2 , to produce consistent rankings across the images in which they are extracted @cite_27 , to provide particularly sharp and thus unambiguous responses @cite_18 , or even to predict the probability of a certain pixel resulting in an inlier @cite_11 . A majority of these methods are compared in the recent survey @cite_22 .
{ "cite_N": [ "@cite_18", "@cite_22", "@cite_27", "@cite_2", "@cite_11" ], "mid": [ "2798971533", "2883130584", "2556970001", "2345643369", "1755205674" ], "abstract": [ "Local feature detection is a fundamental task in computer vision, and hand-crafted feature detectors such as SIFT have shown success in applications including image-based localization and registration. Recent work has used features detected in texture images for precise global localization, but is limited by the performance of existing feature detectors on textures, as opposed to natural images. We propose an effective and scalable method for learning feature detectors for textures, which combines an existing \"ranking\" loss with an efficient fully-convolutional architecture as well as a new training-loss term that maximizes the \"peakedness\" of the response map. We demonstrate that our detector is more repeatable than existing methods, leading to improvements in a real-world texture-based localization application.", "We present a large scale benchmark for the evaluation of local feature detectors. Our key innovation is the introduction of a new evaluation protocol which extends and improves the standard detection repeatability measure. The new protocol is better for assessment on a large number of images and reduces the dependency of the results on unwanted distractors such as the number of detected features and the feature magnification factor. Additionally, our protocol provides a comprehensive assessment of the expected performance of detectors under several practical scenarios. Using images from the recently-introduced HPatches dataset, we evaluate a range of state-of-the-art local feature detectors on two main tasks: viewpoint and illumination invariant detection. Contrary to previous detector evaluations, our study contains an order of magnitude more image sequences, resulting in a quantitative evaluation significantly more robust to over-fitting. We also show that traditional detectors are still very competitive when compared to recent deep-learning alternatives.", "Several machine learning tasks require to represent the data using only a sparse set of interest points. An ideal detector is able to find the corresponding interest points even if the data undergo a transformation typical for a given domain. Since the task is of high practical interest in computer vision, many hand-crafted solutions were proposed. In this paper, we ask a fundamental question: can we learn such detectors from scratch? Since it is often unclear what points are interesting, human labelling cannot be used to find a truly unbiased solution. Therefore, the task requires an unsupervised formulation. We are the first to propose such a formulation: training a neural network to rank points in a transformation-invariant manner. Interest points are then extracted from the top bottom quantiles of this ranking. We validate our approach on two tasks: standard RGB image interest point detection and challenging cross-modal interest point detection between RGB and depth images. We quantitatively show that our unsupervised method performs better or on-par with baselines.", "Local covariant feature detection, namely the problem of extracting viewpoint invariant features from images, has so far largely resisted the application of machine learning techniques. In this paper, we propose the first fully general formulation for learning local covariant feature detectors. 
We propose to cast detection as a regression problem, enabling the use of powerful regressors such as deep neural networks. We then derive a covariance constraint that can be used to automatically learn which visual structures provide stable anchors for local feature detection. We support these ideas theoretically, proposing a novel analysis of local features in term of geometric transformations, and we show that all common and many uncommon detectors can be derived in this framework. Finally, we present empirical results on translation and rotation covariant detectors on standard feature benchmarks, showing the power and flexibility of the framework.", "This paper presents a novel two-frame motion estimation algorithm. The first step is to approximate each neighborhood of both frames by quadratic polynomials, which can be done efficiently using the polynomial expansion transform. From observing how an exact polynomial transforms under translation a method to estimate displacement fields from the polynomial expansion coefficients is derived and after a series of refinements leads to a robust algorithm. Evaluation on the Yosemite sequence shows good results." ] }
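A toy version of such a fully convolutional detector, written in PyTorch under assumed layer sizes (it is not any specific published detector), produces a dense response map from which interest points are read off as the strongest local maxima.

```python
# Toy fully convolutional "cornerness" network (assumed architecture),
# followed by 3x3 non-maximum suppression and top-K point selection.
import torch
import torch.nn as nn
import torch.nn.functional as F

net = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 1),             # dense per-pixel response map
)

img = torch.rand(1, 1, 240, 320)     # any image size works: no dense layers
score = net(img)                     # shape (1, 1, 240, 320)

pooled = F.max_pool2d(score, 3, stride=1, padding=1)
peaks = score.masked_fill(score < pooled, float("-inf")).flatten()
topk = torch.topk(peaks, k=100).indices
ys, xs = topk // 320, topk % 320     # back to pixel coordinates
print("strongest points:", list(zip(xs[:5].tolist(), ys[:5].tolist())))
```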
1811.10681
2903234512
The extraction and matching of interest points is a prerequisite for many geometric computer vision problems. Traditionally, matching has been achieved by assigning descriptors to interest points and matching points that have similar descriptors. In this paper, we propose a method by which interest points are instead already implicitly matched at detection time. With this, descriptors do not need to be calculated, stored, communicated, or matched any more. This is achieved by a convolutional neural network with multiple output channels and can be thought of as a collection of a variety of detectors, each specialized to specific visual features. This paper describes how to design and train such a network in a way that results in successful relative pose estimation performance despite the limitation on interest point count. While the overall matching score is slightly lower than with traditional methods, the approach is descriptor free and thus enables localization systems with a significantly smaller memory footprint and multi-agent localization systems with lower bandwidth requirements. The network also outputs the confidence for a specific interest point resulting in a valid match. We evaluate performance relative to state-of-the-art alternatives.
Finally, CNNs have also been shown to be useful for spatial normalization @cite_43 or affine region detection @cite_13 .
{ "cite_N": [ "@cite_43", "@cite_13" ], "mid": [ "2951005624", "2949213045" ], "abstract": [ "Convolutional Neural Networks define an exceptionally powerful class of models, but are still limited by the lack of ability to be spatially invariant to the input data in a computationally and parameter efficient manner. In this work we introduce a new learnable module, the Spatial Transformer, which explicitly allows the spatial manipulation of data within the network. This differentiable module can be inserted into existing convolutional architectures, giving neural networks the ability to actively spatially transform feature maps, conditional on the feature map itself, without any extra training supervision or modification to the optimisation process. We show that the use of spatial transformers results in models which learn invariance to translation, scale, rotation and more generic warping, resulting in state-of-the-art performance on several benchmarks, and for a number of classes of transformations.", "In this paper we show how to learn directly from image data (i.e., without resorting to manually-designed features) a general similarity function for comparing image patches, which is a task of fundamental importance for many computer vision problems. To encode such a function, we opt for a CNN-based model that is trained to account for a wide variety of changes in image appearance. To that end, we explore and study multiple neural network architectures, which are specifically adapted to this task. We show that such an approach can significantly outperform the state-of-the-art on several problems and benchmark datasets." ] }
1811.10681
2903234512
The extraction and matching of interest points is a prerequisite for many geometric computer vision problems. Traditionally, matching has been achieved by assigning descriptors to interest points and matching points that have similar descriptors. In this paper, we propose a method by which interest points are instead already implicitly matched at detection time. With this, descriptors do not need to be calculated, stored, communicated, or matched any more. This is achieved by a convolutional neural network with multiple output channels and can be thought of as a collection of a variety of detectors, each specialized to specific visual features. This paper describes how to design and train such a network in a way that results in successful relative pose estimation performance despite the limitation on interest point count. While the overall matching score is slightly lower than with traditional methods, the approach is descriptor free and thus enables localization systems with a significantly smaller memory footprint and multi-agent localization systems with lower bandwidth requirements. The network also outputs the confidence for a specific interest point resulting in a valid match. We evaluate performance relative to state-of-the-art alternatives.
There have been some previous attempts to significantly reduce the amount of data associated with descriptors. In @cite_37 , the authors replace descriptors with the identifiers of the corresponding visual words in a Bag-of-Words visual vocabulary @cite_39 (see the sketch below). This can be used jointly with Bag-of-Words place recognition to facilitate multi-agent relative pose estimation with minimal data exchange. In @cite_4 , the authors propose highly compressed maps for visual-inertial localization in which binary descriptors are projected down to as little as one byte. In contrast, our approach circumvents the use of any explicit descriptor by implicitly embedding a form of descriptor in the learned detection algorithm itself.
{ "cite_N": [ "@cite_37", "@cite_4", "@cite_39" ], "mid": [ "2220727758", "2278591674", "2131846894" ], "abstract": [ "The performance of any cooperative task that involves two or more robots will be determined by their capacity to recognize common information of the environment. Vision sensors are very effective for this particular goal, but the cost of transmitting the visual information represents a real issue, even more if communication must be performed in narrow bandwidth networks and or over a multi-hop path. Visual vocabularies provide a dimensionality reduction that has been effectively used in computer vision to reduce the computational load of performing searches in large volumes of data. In this paper we propose to exploit the same technique to decrease the volume of information that is exchanged in the network. This way, robots do not need to send the full descriptors associated to the features they observe, but only the word indices of the corresponding features in the vocabulary. Experiments with a wide variety of vocabularies are used to evaluate the quality of the association given by the algorithm. Finally, real experiments in a wireless network with a limited bandwidth are reported, showing the advantages of the proposed method compared to the communication of full images or feature descriptors.", "Accurately estimating a robot's pose relative to a global scene model and precisely tracking the pose in real-time is a fundamental problem for navigation and obstacle avoidance tasks. Due to the computational complexity of localization against a large map and the memory consumed by the model, state-of-the-art approaches are either limited to small workspaces or rely on a server-side system to query the global model while tracking the pose locally. The latter approaches face the problem of smoothly integrating the server's pose estimates into the trajectory computed locally to avoid temporal discontinuities. In this paper, we demonstrate that large-scale, real-time pose estimation and tracking can be performed on mobile platforms with limited resources without the use of an external server. This is achieved by employing map and descriptor compression schemes as well as efficient search algorithms from computer vision. We derive a formulation for integrating the global pose information into a local state estimator that produces much smoother trajectories than current approaches. Through detailed experiments, we evaluate each of our design choices individually and document its impact on the overall system performance, demonstrating that our approach outperforms state-of-the-art algorithms for localization at scale.", "We describe an approach to object and scene retrieval which searches for and localizes all the occurrences of a user outlined object in a video. The object is represented by a set of viewpoint invariant region descriptors so that recognition can proceed successfully despite changes in viewpoint, illumination and partial occlusion. The temporal continuity of the video within a shot is used to track the regions in order to reject unstable regions and reduce the effects of noise in the descriptors. The analogy with text retrieval is in the implementation where matches on descriptors are pre-computed (using vector quantization), and inverted file systems and document rankings are used. The result is that retrieved is immediate, returning a ranked list of key frames shots in the manner of Google. The method is illustrated for matching in two full length feature films." 
] }
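A minimal sketch of the word-identifier scheme of @cite_37 , assuming a shared, randomly generated vocabulary and brute-force quantisation purely for illustration: each agent transmits only word indices, and correspondences are feature pairs that fall into the same visual word. The printed byte counts illustrate the bandwidth saving that motivates the scheme.

```python
# Sketch of descriptor compression via a Bag-of-Words vocabulary: send
# word indices instead of full descriptors, match by equal word id.
import numpy as np

rng = np.random.default_rng(1)
vocab = rng.normal(size=(1000, 32))          # shared visual vocabulary

def word_ids(desc):
    # nearest visual word per descriptor (brute-force for clarity)
    d2 = ((desc[:, None, :] - vocab[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)

desc_a = rng.normal(size=(50, 32))           # agent A's local features
desc_b = rng.normal(size=(60, 32))           # agent B's local features
ids_a, ids_b = word_ids(desc_a), word_ids(desc_b)  # only these are sent

matches = [(i, j) for i, wa in enumerate(ids_a)
                  for j, wb in enumerate(ids_b) if wa == wb]
print(len(matches), "word-level correspondences;",
      ids_a.nbytes, "bytes sent instead of", desc_a.nbytes)
```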
1811.10915
2949717039
In order to prevent the disclosure of privacy-sensitive data, such as names and relations between users, social network graphs have to be anonymised before publication. Naive anonymisation of social network graphs often consists in deleting all identifying information of the users, while maintaining the original graph structure. Various types of attacks on naively anonymised graphs have been developed. Active attacks form a special type of such privacy attacks, in which the adversary enrols a number of fake users, often called sybils, to the social network, allowing the adversary to create unique structural patterns later used to re-identify the sybil nodes and other users after anonymisation. Several studies have shown that adding a small amount of noise to the published graph already suffices to mitigate such active attacks. Consequently, active attacks have been dubbed a negligible threat to privacy-preserving social graph publication. In this paper, we argue that these studies unveil shortcomings of specific attacks, rather than inherent problems of active attacks as a general strategy. In order to support this claim, we develop the notion of a robust active attack, which is an active attack that is resilient to small perturbations of the social network graph. We formulate the design of robust active attacks as an optimisation problem and we give definitions of robustness for different stages of the active attack strategy. Moreover, we introduce various heuristics to achieve these notions of robustness and experimentally show that the new robust attacks are considerably more resilient than the original ones, while remaining at the same level of feasibility.
In the context of obfuscation methods, which aim to publish a new version of the social graph with randomly added perturbations, @cite_30 assess the possibility of the attacker leveraging knowledge about the noise generation to launch what they call a probabilistic attack. In their work, the authors provided accurate estimators for several graph parameters in the noisy graphs, to support the claim that useful computations can still be conducted on the graphs after adding noise. Among these estimators, they included one for the degree sequence of the graph (a sketch of such an estimator under a uniform edge-flip model is given below). Then, noting that an active attacker can indeed profit from this estimator to strengthen the walk-based attack, they show that after increasing the perturbation by a sufficiently small amount this attack also fails. Although the probabilistic attack presented in @cite_30 features some limited level of noise resilience, it is not usable as a general strategy, because it requires the noise to follow a specific distribution and the parameters of this distribution to be known by the adversary. Our definition of robust attack makes no assumptions about the type of perturbation applied to the graph.
{ "cite_N": [ "@cite_30" ], "mid": [ "1997421642" ], "abstract": [ "Social network data analysis raises concerns about the privacy of related entities or individuals. To address this issue, organizations can publish data after simply replacing the identities of individuals with pseudonyms, leaving the overall structure of the social network unchanged. However, it has been shown that attacks based on structural identification (e.g., a walk-based attack) enable an adversary to re-identify selected individuals in an anonymized network. In this paper we explore the capacity of techniques based on random edge perturbation to thwart such attacks. We theoretically establish that any kind of structural identification attack can effectively be prevented using random edge perturbation and show that, surprisingly, important properties of the whole network, as well as of subgraphs thereof, can be accurately calculated and hence data analysis tasks performed on the perturbed data, given that the legitimate data recipient knows the perturbation probability as well. Yet we also examine ways to enhance the walk-based attack, proposing a variant we call probabilistic attack. Nevertheless, we demonstrate that such probabilistic attacks can also be prevented under sufficient perturbation. Eventually, we conduct a thorough theoretical study of the probability of success of any structural attack as a function of the perturbation probability. Our analysis provides a powerful tool for delineating the identification risk of perturbed social network data; our extensive experiments with synthetic and real datasets confirm our expectations." ] }
1811.10915
2949717039
In order to prevent the disclosure of privacy-sensitive data, such as names and relations between users, social network graphs have to be anonymised before publication. Naive anonymisation of social network graphs often consists in deleting all identifying information of the users, while maintaining the original graph structure. Various types of attacks on naively anonymised graphs have been developed. Active attacks form a special type of such privacy attacks, in which the adversary enrols a number of fake users, often called sybils, to the social network, allowing the adversary to create unique structural patterns later used to re-identify the sybil nodes and other users after anonymisation. Several studies have shown that adding a small amount of noise to the published graph already suffices to mitigate such active attacks. Consequently, active attacks have been dubbed a negligible threat to privacy-preserving social graph publication. In this paper, we argue that these studies unveil shortcomings of specific attacks, rather than inherent problems of active attacks as a general strategy. In order to support this claim, we develop the notion of a robust active attack, which is an active attack that is resilient to small perturbations of the social network graph. We formulate the design of robust active attacks as an optimisation problem and we give definitions of robustness for different stages of the active attack strategy. Moreover, we introduce various heuristics to achieve these notions of robustness and experimentally show that the new robust attacks are considerably more resilient than the original ones, while remaining at the same level of feasibility.
Finally, we point out that the active attack strategy shares some similarities with graph watermarking methods, e.g. @cite_19 @cite_23 @cite_18 . The purpose of graph watermarking is to release a graph containing embedded instances of a small subgraph, the watermark, that can be easily retrieved by the graph publisher, while remaining imperceptible to others and being hard to remove or distort. Note that the goals of the graph owner and the adversary are to some extent inverted in graph watermarking with respect to active attacks. Moreover, since the graph owner knows the entire graph, they can profit from this knowledge when building the watermark. During the sybil subgraph creation phase of an active attack, in contrast, only a partial view of the social graph is available to the attacker.
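As a toy illustration of the keyed-subgraph idea (not a reconstruction of any of the cited schemes), the sketch below derives a user-specific watermark edge set from a graph key and a user key, and declares a leak when most of those edges survive in a suspect graph; all names and the threshold are hypothetical.

```python
import hashlib
import random

def watermark_edges(node_ids, graph_key: str, user_key: str, num_edges: int):
    """Derive a deterministic, user-specific set of watermark edges.

    Toy keyed construction: seed a PRNG with a hash of both keys and
    sample edge slots. Real schemes add stealth and robustness on top.
    """
    seed = int.from_bytes(
        hashlib.sha256(f"{graph_key}|{user_key}".encode()).digest()[:8], "big")
    rng = random.Random(seed)
    nodes = sorted(node_ids)
    edges = set()
    while len(edges) < num_edges:
        u, v = rng.sample(nodes, 2)
        edges.add((min(u, v), max(u, v)))
    return edges

def leaked_by(suspect_edges, node_ids, graph_key, user_key,
              num_edges, threshold=0.8):
    """Attribute a leak to a user if enough of their watermark survives.

    suspect_edges: set of (u, v) tuples with u < v.
    """
    wm = watermark_edges(node_ids, graph_key, user_key, num_edges)
    present = sum(1 for e in wm if e in suspect_edges)
    return present / len(wm) >= threshold
```

The asymmetry noted above is visible here: the publisher can place watermark slots anywhere in a graph it fully knows, whereas an active attacker must build its sybil subgraph from a partial view.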
{ "cite_N": [ "@cite_19", "@cite_18", "@cite_23" ], "mid": [ "2524442961", "2963136930", "2052987786" ], "abstract": [ "In this paper, we discuss graph-theoretic approaches to software watermarking and fingerprinting. Software watermarking is used to discourage intellectual property theft and software fingerprinting is used to trace intellectual property copyright violations. We focus on two algorithms that encode information in software through the use of graph structures. We then consider the different attack models intended to disable the watermark while not affecting the correctness or performance of the program. Finally, we present several classes of graphs that can be used for watermarking and fingerprinting and analyze their properties (resiliency, data rate, performance, and stealthiness).", "We introduce models and algorithmic foundations for graph watermarking. Our approach is based on characterizing the feasibility of graph watermarking in terms of keygen, marking, and identification functions defined over graph families with known distributions. We demonstrate the strength of this approach with exemplary watermarking schemes for two random graph models, the classic Erdős-Renyi model and a random power-law graph model, both of which are used to model real-world networks.", "From network topologies to online social networks, many of today's most sensitive datasets are captured in large graphs. A significant challenge facing the data owners is how to share sensitive graphs with collaborators or authorized users, e.g. ISP's network topology graphs with a third party networking equipment vendor. Current tools can provide limited node or edge privacy, but significantly modify the graph reducing its utility. In this work, we propose a new alternative in the form of graph watermarks. Graph watermarks are small graphs tailor-made for a given graph dataset, a secure graph key, and a secure user key. To share a sensitive graph G with a collaborator C, the owner generates a watermark graph W using G, the graph key, and C's key as input, and embeds W into G to form G'. If G' is leaked by C, its owner can reliably determine if the watermark W generated for C does in fact reside inside G', thereby proving C is responsible for the leak. Graph watermarks serve both as a deterrent against data leakage and a method of recourse after a leak. We provide robust schemes for embedding and extracting watermarks, and use analysis and experiments on large real graphs to show that they are unique and difficult to forge. We study the robustness of graph watermarks against both single and powerful colluding attacker models, then propose and evaluate mechanisms to dramatically improve resilience." ] }
1811.10762
2902485742
Frame duplication consists in duplicating a sequence of consecutive frames and inserting or replacing it to conceal or imitate specific event content in the same source video. To automatically detect duplicated frames in a manipulated video, we propose a coarse-to-fine deep convolutional neural network framework to detect and localize frame duplications. We first run an I3D network to obtain the most likely candidate duplicated frame sequences and selected frame sequences, and then run a Siamese network with a ResNet backbone to identify each pair of a duplicated frame and the corresponding selected frame. We also propose a heuristic strategy to formulate the video-level score. We then apply our inconsistency detector, fine-tuned on the I3D network, to distinguish duplicated frames from selected frames. With the experimental evaluation conducted on two video datasets, we demonstrate that our proposed method outperforms the current state-of-the-art methods.
Inter-frame forgery refers to consecutive frame deletion and consecutive frame duplication, in which content is copied either spatially or temporally. Keypoints are salient local patches detected across different scales. Keypoint-based methodologies can be further subdivided into categories: trajectory-based @cite_11 @cite_10 , keyframe-based matching @cite_20 , and visual-words-based @cite_30 . In particular, keyframe-based features have been shown to perform very well for near-duplicate video and image detection @cite_20 .
{ "cite_N": [ "@cite_30", "@cite_10", "@cite_20", "@cite_11" ], "mid": [ "2154946238", "", "2022303741", "2248282706" ], "abstract": [ "The Digital Forgeries though not visibly identifiable to human perception it may alter or meddle with underlying natural statistics of digital content. Tampering involves fiddling with video content in order to cause damage or make unauthorized alteration modification. Tampering detection in video is cumbersome compared to image when considering the properties of the video. Tampering impacts need to be studied and the applied technique method is used to establish the factual information for legal course in judiciary. In this paper we give an overview of the prior literature and challenges involved in video forgery detection where passive approach is found.", "", "This paper presents an efficient approach for copies detection in a large videos archive consisting of several hundred of hours. The video content indexing method consists of extracting the dynamic behavior on the local description of interest points and further on the estimation of their trajectories along the video sequence. Analyzing the low-level description obtained allows to highlight trends of behaviors and then to assign a label of behavior to each local descriptor. Such an indexing approach has several interesting properties: it provides a rich, compact and generic description, while labels of behavior provide a high-level description of the video content. Here, we focus on video Content Based Copy Detection (CBCD). Copy detection is problematic as similarity search problem but with prominent differences. To be efficient, it requires a dedicated on-line retrieval method based on a specific voting function. This voting function must be robust to signal transformations and discriminating versus high similarities which are not copies. The method we propose in this paper is a dedicated on-line retrieval method based on a combination of the different dynamic contexts computed during the off-line indexing. A spatio-temporal registration based on the relevant combination of detected labels is then applied. This approach is evaluated using a huge video database of 300 hours with different video tests. The method is compared to a state-of-the art technique in the same conditions. We illustrate that taking labels into account in the specific voting process reduces false alarms significantly and drastically improves the precision.", "A video copy detection system is a content-based search engine [1]. It aims at deciding whether a query video segment is a copy of a video from the indexed dataset or not. A copy may be distorted in various ways. If the system finds a matching video segment, it returns the name of the database video and the time stamp where the query was copied from. Fig. 1 illustrates the video copyright detection system we have developed for the TRECVID 2008 evaluation campaign. The components of this system are detailed in Section 2. Most of them are derived from the state-of-the-art image search engine introduced in [2]. It builds upon the bag-of-features image search system proposed in [3], and provides a more precise representation by adding 1) a Hamming embedding and 2) weak geometric consistency constraints. The HE provides binary signatures that refine the visual word based matching. WGC filters matching descriptors that are not consistent in terms of angle and scale. 
HE and WGC are integrated within an inverted file and are efficiently exploited for all indexed frames, even for a very large dataset. In our best runs, we have indexed 2 million keyframes, represented by 800 million local descriptors. We give some conclusions drawn from our experiments in Section 3. Finally, in section 4 we briefly present our run for the high-level feature detection task." ] }
1811.10762
2902485742
Frame duplication consists in duplicating a sequence of consecutive frames and inserting or replacing it to conceal or imitate specific event content in the same source video. To automatically detect duplicated frames in a manipulated video, we propose a coarse-to-fine deep convolutional neural network framework to detect and localize frame duplications. We first run an I3D network to obtain the most likely candidate duplicated frame sequences and selected frame sequences, and then run a Siamese network with a ResNet backbone to identify each pair of a duplicated frame and the corresponding selected frame. We also propose a heuristic strategy to formulate the video-level score. We then apply our inconsistency detector, fine-tuned on the I3D network, to distinguish duplicated frames from selected frames. With the experimental evaluation conducted on two video datasets, we demonstrate that our proposed method outperforms the current state-of-the-art methods.
Recently, Huang @cite_12 proposed a fusion of audio forensics detection methods for video inter-frame forgery. Zhao @cite_23 developed a similarity-analysis-based method to detect inter-frame forgery within a video shot. In this method, the HSV color histogram of each frame is calculated to detect and locate tampered frames in the shot, and SURF feature extraction with FLANN (Fast Library for Approximate Nearest Neighbors) matching is then used for further confirmation.
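A sketch of the histogram-similarity stage of such a method is given below; this is our illustrative reconstruction, not the authors' code. The SURF+FLANN confirmation step is omitted, the video path and the dip threshold are placeholders, and only standard OpenCV calls are used.

```python
import cv2
import numpy as np

def hsv_histogram(frame, bins=(8, 8, 8)):
    """Normalized HSV color histogram of one BGR frame."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1, 2], None, list(bins),
                        [0, 180, 0, 256, 0, 256])
    return cv2.normalize(hist, hist).flatten()

def consecutive_similarities(video_path):
    """Histogram correlation between every pair of consecutive frames."""
    cap = cv2.VideoCapture(video_path)
    sims, prev = [], None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        h = hsv_histogram(frame)
        if prev is not None:
            sims.append(cv2.compareHist(prev, h, cv2.HISTCMP_CORREL))
        prev = h
    cap.release()
    return np.array(sims)

# Deletions tend to show up as isolated dips in similarity; duplications
# as a run of frames whose histograms repeat an earlier run. Suspicious
# locations would then be confirmed with SURF + FLANN keypoint matching.
sims = consecutive_similarities("suspect_clip.mp4")  # placeholder path
candidate_cuts = np.where(sims < 0.5)[0]             # placeholder threshold
```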
{ "cite_N": [ "@cite_23", "@cite_12" ], "mid": [ "2798793675", "2800664266" ], "abstract": [ "Despite recent emergence of video caption methods, how to generate fine-grained video descriptions (i.e., long and detailed commentary about individual movements of multiple subjects as well as their frequent interactions) is far from being solved, which however has great applications such as automatic sports narrative. To this end, this work makes the following contributions. First, to facilitate this novel research of fine-grained video caption, we collected a novel dataset called Fine-grained Sports Narrative dataset (FSN) that contains 2K sports videos with ground-truth narratives from YouTube.com. Second, we develop a novel performance evaluation metric named Fine-grained Captioning Evaluation (FCE) to cope with this novel task. Considered as an extension of the widely used METEOR, it measures not only the linguistic performance but also whether the action details and their temporal orders are correctly described. Third, we propose a new framework for fine-grained sports narrative task. This network features three branches: 1) a spatio-temporal entity localization and role discovering sub-network; 2) a fine-grained action modeling sub-network for local skeleton motion description; and 3) a group relationship modeling sub-network to model interactions between players. We further fuse the features and decode them into long narratives by a hierarchically recurrent structure. Extensive experiments on the FSN dataset demonstrates the validity of the proposed framework for fine-grained video caption.", "Abstract The forgery operation of digital video in the temporal domain is often accompanied by the synchronization of the audio channel operation. In this paper, we proposed a fusion of audio forensics detection methods for video inter-frame forgery. First, the audio channel of the video is extracted, and discrete wavelet packet decomposition and analysis of singularity points of audio signals are used to locate the forged singularity points. Next, features of each frame of the video are extracted with the perceptual hash and used to calculate the similarity between consecutive frames, to locate the forgery position in the video frame sequence. We fused the results of the audio channel and the video frame sequence channel. The QDCT feature is used to further fine detect the suspected forgery location. Our method can position replication source locations for copy-move forgery. Experiments show that our method has higher accuracy and better performance in comparison with similar methods, especially on the delete forgery operation." ] }
1811.10762
2902485742
Frame duplication consists in duplicating a sequence of consecutive frames and inserting or replacing it to conceal or imitate specific event content in the same source video. To automatically detect duplicated frames in a manipulated video, we propose a coarse-to-fine deep convolutional neural network framework to detect and localize frame duplications. We first run an I3D network to obtain the most likely candidate duplicated frame sequences and selected frame sequences, and then run a Siamese network with a ResNet backbone to identify each pair of a duplicated frame and the corresponding selected frame. We also propose a heuristic strategy to formulate the video-level score. We then apply our inconsistency detector, fine-tuned on the I3D network, to distinguish duplicated frames from selected frames. With the experimental evaluation conducted on two video datasets, we demonstrate that our proposed method outperforms the current state-of-the-art methods.
Copy-move forgery is created by copying and pasting content within the same image, and potentially post-processing it @cite_3 . Several methodologies, for example those based on PCA, DWT, or SVD, have high computation time and are not suitable for real-time applications. For instance, Wang @cite_18 propose a dimensionality-reduction-based scheme that applies PCA (Principal Component Analysis) to the individual image blocks. Its drawback is that it is designed for grayscale images and has to process each color channel of color images separately. Mohamadian @cite_5 develop a Singular Value Decomposition (SVD) based method in which the image is divided into many small overlapping blocks, after which SVD is applied to extract the duplicated regions. Its shortcoming is that the method does not handle color images.
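A minimal grayscale sketch of this block-based SVD pipeline (overlapping blocks, singular-value features, lexicographic sorting, adjacent-row matching) follows; block size, tolerance, and minimum shift are placeholders of our choosing, and the brute-force loop deliberately exhibits the high computation cost criticized above.

```python
import numpy as np

def svd_block_features(gray: np.ndarray, b: int = 8, k: int = 4):
    """Top-k singular values of every overlapping b x b block."""
    h, w = gray.shape
    feats, locs = [], []
    for y in range(h - b + 1):
        for x in range(w - b + 1):
            s = np.linalg.svd(gray[y:y + b, x:x + b].astype(float),
                              compute_uv=False)
            feats.append(s[:k])
            locs.append((y, x))
    return np.asarray(feats), locs

def find_duplicate_blocks(gray, b=8, k=4, feat_tol=1.0, min_shift=16):
    """Sort features lexicographically; near-identical adjacent rows whose
    blocks lie far apart in the image are reported as copy-move pairs."""
    feats, locs = svd_block_features(gray, b, k)
    order = np.lexsort(feats.T[::-1])  # first feature = primary sort key
    pairs = []
    for i, j in zip(order[:-1], order[1:]):
        if np.abs(feats[i] - feats[j]).max() < feat_tol:
            (y1, x1), (y2, x2) = locs[i], locs[j]
            if abs(y1 - y2) + abs(x1 - x2) >= min_shift:
                pairs.append((locs[i], locs[j]))
    return pairs
```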
{ "cite_N": [ "@cite_5", "@cite_18", "@cite_3" ], "mid": [ "2170650480", "2741379522", "2149073238" ], "abstract": [ "In this paper we propose a robust and fully automatic method to detect duplicated regions in digital images. Copied areas in uniform and non-uniform regions are detectable in our method. There are several methods to make forged images, but the most common is copy-move forgery, that the forger copies a part(s) of image and pasted it into another part(s) of that image. Many researchers have done beneficial researches on it. Most of them can find only copy-moved forgery. In other words, they can find regions which are only copied and pasted without any changes, but failed to find copied regions with scaling or rotating before pasting. Many forgers make some changes on copied regions so that the image sounds more natural. SIFT features can find forged regions even if they are rotated or scaled. This method has some other advantages, but failed to find flat copied regions. Zernike moments are invariant against rotation. They can find flat copied regions too, but sensitive to scaling. So it is clear that using these two features is very proper to detect all copied regions in an image. By applying SIFT detection method on overall the image and then Zernike moments detection method on regions where SIFT features have not been found, The processing time is reduced in comparison to applying each of these two methods on entire image.", "Region duplication forgery, in which a part of a digital image is copied and then pasted to another portion of the same image, is one of the simple and common image forgery techniques. Most of the existing algorithms are not robust to the post region duplication image processing, and have high time complexity. In this paper, we describe an e-cient and robust algorithm for detecting and localizing this type of malicious tampering. The image is flrst reduced in dimension by Gaussian pyramid, and the Hu moment is applied to the flxed sized overlapping blocks of low-frequency image. The eigenvectors are lexicographically sorted. Then, similar eigenvectors are matched by a certain threshold value. Finally, the area threshold value is proposed to remove the wrong similar blocks. The mathematical morphology operations are performed to locate the tampered part. Experimental results show that our method is robust and that it can not only successfully detect this type of tampering for images subject to various forms of post region duplication image processing, including noise contamination, blurring, and severe lossy compression, but also reduce the total number of blocks to narrow block-matching searching space, which can improve the method e-ciency.", "A copy-move forgery is created by copying and pasting content within the same image, and potentially postprocessing it. In recent years, the detection of copy-move forgeries has become one of the most actively researched topics in blind image forensics. A considerable number of different algorithms have been proposed focusing on different types of postprocessed copies. In this paper, we aim to answer which copy-move forgery detection algorithms and processing steps (e.g., matching, filtering, outlier detection, affine transformation estimation) perform best in various postprocessing scenarios. The focus of our analysis is to evaluate the performance of previously proposed feature sets. We achieve this by casting existing algorithms in a common pipeline. 
In this paper, we examined the 15 most prominent feature sets. We analyzed the detection performance on a per-image basis and on a per-pixel basis. We created a challenging real-world copy-move dataset, and a software framework for systematic image manipulation. Experiments show, that the keypoint-based features Sift and Surf, as well as the block-based DCT, DWT, KPCA, PCA, and Zernike features perform very well. These feature sets exhibit the best robustness against various noise sources and downsampling, while reliably identifying the copied regions." ] }
1811.10762
2902485742
Frame duplication consists in duplicating a sequence of consecutive frames and inserting or replacing it to conceal or imitate specific event content in the same source video. To automatically detect duplicated frames in a manipulated video, we propose a coarse-to-fine deep convolutional neural network framework to detect and localize frame duplications. We first run an I3D network to obtain the most likely candidate duplicated frame sequences and selected frame sequences, and then run a Siamese network with a ResNet backbone to identify each pair of a duplicated frame and the corresponding selected frame. We also propose a heuristic strategy to formulate the video-level score. We then apply our inconsistency detector, fine-tuned on the I3D network, to distinguish duplicated frames from selected frames. With the experimental evaluation conducted on two video datasets, we demonstrate that our proposed method outperforms the current state-of-the-art methods.
Recently, Yang @cite_25 proposed a copy-move forgery detection method based on a modified SIFT detector. Wang @cite_0 presented a novel block-based robust copy-move forgery detection approach using invariant quaternion exponent moments (QEMs), in which falsely matched block pairs are removed by customizing random sample consensus (RANSAC) with QEM magnitude differences. Compared to conventional copy-move forgery detection techniques, it is robust to noise addition, lossy compression, scaling, and rotation.
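The RANSAC filtering step can be illustrated generically: matched coordinates from the copied and the pasted region should agree on a single affine transform, and pairs that do not are discarded. This is a plain affine consensus test, not the QEM-magnitude-customized variant of @cite_0, and the function name is ours.

```python
import cv2
import numpy as np

def ransac_filter_matches(src_pts: np.ndarray, dst_pts: np.ndarray,
                          reproj_thresh: float = 3.0):
    """Keep only match pairs consistent with one affine transform.

    src_pts, dst_pts: (N, 2) float32 arrays of matched coordinates
    (copied region -> pasted region). Returns the 2x3 affine matrix,
    which also recovers the scaling/rotation applied to the copy,
    and a boolean inlier mask.
    """
    M, inliers = cv2.estimateAffine2D(src_pts, dst_pts, method=cv2.RANSAC,
                                      ransacReprojThreshold=reproj_thresh)
    if M is None:
        return None, np.zeros(len(src_pts), dtype=bool)
    return M, inliers.ravel().astype(bool)
```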
{ "cite_N": [ "@cite_0", "@cite_25" ], "mid": [ "2540421039", "2570989757" ], "abstract": [ "The detection of forgeries in color images is a very important topic in forensic science. Copy–move (or copy–paste) forgery is the most common form of tampering associated with color images. Conventional copy–move forgeries detection techniques usually suffer from the problems of false positives and susceptibility to many signal processing operations. It is a challenging work to design a robust copy–move forgery detection method. In this paper, we present a novel block-based robust copy–move forgery detection approach using invariant quaternion exponent moments (QEMs). Firstly, original tempered color image is preprocessed with Gaussian low-pass filter, and the filtered color image is divided into overlapping circular blocks. Then, the accurate and robust feature descriptor, QEMs modulus, is extracted from color image block holistically as a vector field. Finally, exact Euclidean locality sensitive hashing is utilized to find rapidly the matching blocks, and the falsely matched block pairs are removed by customizing the random sample consensus with QEMs magnitudes differences. Extensive experimental results show the efficacy of the newly proposed approach in detecting copy–paste forgeries under various challenging conditions, such as noise addition, lossy compression, scaling, and rotation. We obtain the average forgery detection accuracy (F-measure) in excess of 96 and 88 across postprocessing operations, at image level and at pixel level, respectively.", "A very common way of image tampering is the copy-move attack. When creating a copy-move forgery, it is often necessary to add or remove important objects from an image. To carry out forensic analysis of such images, various copy-move forgery detection (CMFD) methods have been developed in the literatures. In recent years, many feature-based CMFD approaches have emerged due to its excellent robustness to various transformations. However there is still place to improve performance further. Many of them would suffer from the problem of insufficient matched key-points while performing on the mirror transformed forgeries. Furthermore, many feature-based methods might hardly expose the tempering when the forged region is of uniform texture. In this paper, a novel feature-based CMFD method is proposed. Key-points are detected by using a modified SIFT-based detector. A novel key-points distribution strategy is developed for interspersing the key-points evenly throughout an image. Finally, key-points are descripted by an improved SIFT descriptor which is enhanced for the CMFD scenario. Extensive experimental results are presented to confirm the efficacy." ] }
1811.10907
2950618454
Diffusion is commonly used as a ranking or re-ranking method in retrieval tasks to achieve higher retrieval performance, and has attracted lots of attention in recent years. A downside to diffusion is that it performs slowly in comparison to the naive k-NN search, which causes a non-trivial online computational cost on large datasets. To overcome this weakness, we propose a novel diffusion technique in this paper. In our work, instead of applying diffusion to the query, we pre-compute the diffusion results of each element in the database, making the online search a simple linear combination on top of the k-NN search process. Our proposed method becomes 10 times faster in terms of online search speed. Moreover, we propose to use late truncation instead of early truncation in previous works to achieve better retrieval performance.
Although originally developed for ranking on manifolds @cite_2 @cite_9 @cite_0 , diffusion was soon applied to classification @cite_8 and image segmentation @cite_13 . In the field of image retrieval, it is most frequently used as a re-ranking method @cite_16 @cite_1 .
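Concretely, the ranking-on-manifolds formulation underlying this family of methods computes f* = (1 - α)(I - αS)^{-1} y, where S = D^{-1/2} W D^{-1/2} is the symmetrically normalized affinity matrix and y the initial state; a dense toy implementation might look as follows (large-scale systems solve this sparsely or iteratively instead).

```python
import numpy as np

def diffusion_rank(W: np.ndarray, y: np.ndarray, alpha: float = 0.85):
    """Closed-form manifold ranking (dense toy version).

    W: symmetric affinity matrix with zero diagonal.
    y: initial state, e.g. 1 at the query element and 0 elsewhere.
    Solves the fixed point of f <- alpha * S @ f + (1 - alpha) * y.
    """
    d = W.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    S = d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]
    n = W.shape[0]
    f = np.linalg.solve(np.eye(n) - alpha * S, (1 - alpha) * y)
    return np.argsort(-f)  # ranking: highest diffused score first
```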
{ "cite_N": [ "@cite_8", "@cite_9", "@cite_1", "@cite_0", "@cite_2", "@cite_16", "@cite_13" ], "mid": [ "2154455818", "2131791003", "2963154697", "2142992480", "1854214752", "2559091987", "2125637308" ], "abstract": [ "We consider the general problem of learning from labeled and unlabeled data, which is often called semi-supervised learning or transductive inference. A principled approach to semi-supervised learning is to design a classifying function which is sufficiently smooth with respect to the intrinsic structure collectively revealed by known labeled and unlabeled points. We present a simple algorithm to obtain such a smooth solution. Our method yields encouraging experimental results on a number of classification problems and demonstrates effective use of unlabeled data.", "The Google search engine has enjoyed huge success with its web page ranking algorithm, which exploits global, rather than local, hyperlink structure of the web using random walks. Here we propose a simple universal ranking algorithm for data lying in the Euclidean space, such as text or image data. The core idea of our method is to rank the data with respect to the intrinsic manifold structure collectively revealed by a great amount of data. Encouraging experimental results from synthetic, image, and text data illustrate the validity of our method.", "In this paper we address issues with image retrieval benchmarking on standard and popular Oxford 5k and Paris 6k datasets. In particular, annotation errors, the size of the dataset, and the level of challenge are addressed: new annotation for both datasets is created with an extra attention to the reliability of the ground truth. Three new protocols of varying difficulty are introduced. The protocols allow fair comparison between different methods, including those using a dataset pre-processing stage. For each dataset, 15 new challenging queries are introduced. Finally, a new set of 1M hard, semi-automatically cleaned distractors is selected. An extensive1 comparison of the state-of-the-art methods is performed on the new benchmark. Different types of methods are evaluated, ranging from local-feature-based to modern CNN based methods. The best results are achieved by taking the best of the two worlds. Most importantly, image retrieval appears far from being solved.", "In this paper we revisit diffusion processes on affinity graphs for capturing the intrinsic manifold structure defined by pair wise affinity matrices. Such diffusion processes have already proved the ability to significantly improve subsequent applications like retrieval. We give a thorough overview of the state-of-the-art in this field and discuss obvious similarities and differences. Based on our observations, we are then able to derive a generic framework for diffusion processes in the scope of retrieval applications, where the related work represents specific instances of our generic formulation. We evaluate our framework on several retrieval tasks and are able to derive algorithms that e. , g. achieve a 100 bulls eye score on the popular MPEG7 shape retrieval data set.", "The importance of a Web page is an inherently subjective matter, which depends on the readers interests, knowledge and attitudes. But there is still much that can be said objectively about the relative importance of Web pages. This paper describes PageRank, a mathod for rating Web pages objectively and mechanically, effectively measuring the human interest and attention devoted to them. 
We compare PageRank to an idealized random Web surfer. We show how to efficiently compute PageRank for large numbers of pages. And, we show how to apply PageRank to search and to user navigation.", "Query expansion is a popular method to improve the quality of image retrieval with both conventional and CNN representations. It has been so far limited to global image similarity. This work focuses on diffusion, a mechanism that captures the image manifold in the feature space. An efficient off-line stage allows optional reduction in the number of stored regions. In the on-line stage, the proposed handling of unseen queries in the indexing stage removes additional computation to adjust the precomputed data. We perform diffusion through a sparse linear system solver, yielding practical query times well below one second. Experimentally, we observe a significant boost in performance of image retrieval with compact CNN descriptors on standard benchmarks, especially when the query object covers only a small part of the image. Small objects have been a common failure case of CNN-based retrieval.", "A novel method is proposed for performing multilabel, interactive image segmentation. Given a small number of pixels with user-defined (or predefined) labels, one can analytically and quickly determine the probability that a random walker starting at each unlabeled pixel will first reach one of the prelabeled pixels. By assigning each pixel to the label for which the greatest probability is calculated, a high-quality image segmentation may be obtained. Theoretical properties of this algorithm are developed along with the corresponding connections to discrete potential theory and electrical circuits. This algorithm is formulated in discrete space (i.e., on a graph) using combinatorial analogues of standard operators and principles from continuous potential theory, allowing it to be applied in arbitrary dimension on arbitrary graphs" ] }
1811.10907
2950618454
Diffusion is commonly used as a ranking or re-ranking method in retrieval tasks to achieve higher retrieval performance, and has attracted lots of attention in recent years. A downside to diffusion is that it performs slowly in comparison to the naive k-NN search, which causes a non-trivial online computational cost on large datasets. To overcome this weakness, we propose a novel diffusion technique in this paper. In our work, instead of applying diffusion to the query, we pre-compute the diffusion results of each element in the database, making the online search a simple linear combination on top of the k-NN search process. Our proposed method becomes 10 times faster in terms of online search speed. Moreover, we propose to use late truncation instead of early truncation in previous works to achieve better retrieval performance.
Query expansion, a common technique in image retrieval, can improve retrieval performance during query time. Average query expansion (AQE) @cite_15 @cite_16 , a popular type of query expansion because of its simplicity, averages the features of the query's nearest neighbors to form a new query to run search again. When AQE is applied iteratively, the recomputation of the query is akin to traveling along the manifolds of the feature space. Although this traversal is similar to diffusion, AQE only utilizes the relationships between query and database images, but not between each of the database images with each other. With prior knowledge of the relationships between all of the database images, diffusion is thus better able to exploit the manifolds in the feature space than query expansion can.
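For contrast with diffusion, one round of AQE on L2-normalized descriptors is only a few lines; this is a generic sketch with names of our choosing, not a specific published implementation.

```python
import numpy as np

def average_query_expansion(query: np.ndarray, db: np.ndarray, k: int = 10):
    """One AQE round: average the query with its top-k neighbors,
    renormalize, and re-rank the database with the expanded query.

    query: (d,) L2-normalized vector; db: (n, d) L2-normalized matrix.
    """
    topk = np.argsort(-(db @ query))[:k]     # initial k-NN search
    expanded = query + db[topk].sum(axis=0)  # sum = average up to scale
    expanded /= np.linalg.norm(expanded)
    return np.argsort(-(db @ expanded)), expanded
```

Iterating this loop walks along the manifold from the query side only; diffusion additionally exploits the precomputed database-to-database affinities.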
{ "cite_N": [ "@cite_15", "@cite_16" ], "mid": [ "2100398441", "2559091987" ], "abstract": [ "Given a query image of an object, our objective is to retrieve all instances of that object in a large (1M+) image database. We adopt the bag-of-visual-words architecture which has proven successful in achieving high precision at low recall. Unfortunately, feature detection and quantization are noisy processes and this can result in variation in the particular visual words that appear in different images of the same object, leading to missed results. In the text retrieval literature a standard method for improving performance is query expansion. A number of the highly ranked documents from the original query are reissued as a new query. In this way, additional relevant terms can be added to the query. This is a form of blind rele- vance feedback and it can fail if 'outlier' (false positive) documents are included in the reissued query. In this paper we bring query expansion into the visual domain via two novel contributions. Firstly, strong spatial constraints between the query image and each result allow us to accurately verify each return, suppressing the false positives which typically ruin text-based query expansion. Secondly, the verified images can be used to learn a latent feature model to enable the controlled construction of expanded queries. We illustrate these ideas on the 5000 annotated image Oxford building database together with more than 1M Flickr images. We show that the precision is substantially boosted, achieving total recall in many cases.", "Query expansion is a popular method to improve the quality of image retrieval with both conventional and CNN representations. It has been so far limited to global image similarity. This work focuses on diffusion, a mechanism that captures the image manifold in the feature space. An efficient off-line stage allows optional reduction in the number of stored regions. In the on-line stage, the proposed handling of unseen queries in the indexing stage removes additional computation to adjust the precomputed data. We perform diffusion through a sparse linear system solver, yielding practical query times well below one second. Experimentally, we observe a significant boost in performance of image retrieval with compact CNN descriptors on standard benchmarks, especially when the query object covers only a small part of the image. Small objects have been a common failure case of CNN-based retrieval." ] }
1811.10907
2950618454
Diffusion is commonly used as a ranking or re-ranking method in retrieval tasks to achieve higher retrieval performance, and has attracted lots of attention in recent years. A downside to diffusion is that it performs slowly in comparison to the naive k-NN search, which causes a non-trivial online computational cost on large datasets. To overcome this weakness, we propose a novel diffusion technique in this paper. In our work, instead of applying diffusion to the query, we pre-compute the diffusion results of each element in the database, making the online search a simple linear combination on top of the k-NN search process. Our proposed method becomes 10 times faster in terms of online search speed. Moreover, we propose to use late truncation instead of early truncation in previous works to achieve better retrieval performance.
In previous works on diffusion, the query is provided as part of the database. In a real-world setting, however, queries are unavailable until they are issued by users. To tackle this issue without introducing any computational overhead, @cite_16 uses the short list of @math -NN search results to form a sparse initial state vector, instead of using a one-hot vector as the initial state. As a consequence, queries are not included in the neighborhood graph. The downside is that the graph needs to be stored and loaded during the search stage for the random walk, which is inefficient in both memory and computation. Since the previous methods were evaluated on the Oxford @cite_3 and Paris @cite_11 datasets, which are small datasets containing only 55 queries, this inefficiency did not have much impact on the total computation time. When these methods are used on large-scale datasets with many queries, the inefficiency during online search becomes magnified and intractable.
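Under our reading, this query handling amounts to the following sketch: the initial state vector is populated from the k-NN shortlist instead of being one-hot, and the precomputed sparse affinity matrix of the database graph must be kept available online for the random-walk iterations, which is precisely the memory and compute overhead discussed above. Names and the iteration count are illustrative.

```python
import numpy as np
from scipy.sparse import csr_matrix

def diffuse_from_shortlist(S: csr_matrix, nn_ids, nn_sims,
                           alpha: float = 0.85, iters: int = 20):
    """Random-walk diffusion seeded by a k-NN shortlist.

    S: precomputed, symmetrically normalized sparse affinity matrix of
    the database graph (must reside in memory at query time).
    nn_ids / nn_sims: indices and similarities from the k-NN shortlist.
    """
    y = np.zeros(S.shape[0])
    y[np.asarray(nn_ids)] = np.asarray(nn_sims)  # sparse initial state
    f = y.copy()
    for _ in range(iters):
        f = alpha * (S @ f) + (1 - alpha) * y    # power iteration
    return np.argsort(-f)
```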
{ "cite_N": [ "@cite_16", "@cite_3", "@cite_11" ], "mid": [ "2559091987", "2141362318", "2148809531" ], "abstract": [ "Query expansion is a popular method to improve the quality of image retrieval with both conventional and CNN representations. It has been so far limited to global image similarity. This work focuses on diffusion, a mechanism that captures the image manifold in the feature space. An efficient off-line stage allows optional reduction in the number of stored regions. In the on-line stage, the proposed handling of unseen queries in the indexing stage removes additional computation to adjust the precomputed data. We perform diffusion through a sparse linear system solver, yielding practical query times well below one second. Experimentally, we observe a significant boost in performance of image retrieval with compact CNN descriptors on standard benchmarks, especially when the query object covers only a small part of the image. Small objects have been a common failure case of CNN-based retrieval.", "In this paper, we present a large-scale object retrieval system. The user supplies a query object by selecting a region of a query image, and the system returns a ranked list of images that contain the same object, retrieved from a large corpus. We demonstrate the scalability and performance of our system on a dataset of over 1 million images crawled from the photo-sharing site, Flickr [3], using Oxford landmarks as queries. Building an image-feature vocabulary is a major time and performance bottleneck, due to the size of our dataset. To address this problem we compare different scalable methods for building a vocabulary and introduce a novel quantization method based on randomized trees which we show outperforms the current state-of-the-art on an extensive ground-truth. Our experiments show that the quantization has a major effect on retrieval quality. To further improve query performance, we add an efficient spatial verification stage to re-rank the results returned from our bag-of-words model and show that this consistently improves search quality, though by less of a margin when the visual vocabulary is large. We view this work as a promising step towards much larger, \"web-scale \" image corpora.", "The state of the art in visual object retrieval from large databases is achieved by systems that are inspired by text retrieval. A key component of these approaches is that local regions of images are characterized using high-dimensional descriptors which are then mapped to ldquovisual wordsrdquo selected from a discrete vocabulary.This paper explores techniques to map each visual region to a weighted set of words, allowing the inclusion of features which were lost in the quantization stage of previous systems. The set of visual words is obtained by selecting words based on proximity in descriptor space. We describe how this representation may be incorporated into a standard tf-idf architecture, and how spatial verification is modified in the case of this soft-assignment. We evaluate our method on the standard Oxford Buildings dataset, and introduce a new dataset for evaluation. Our results exceed the current state of the art retrieval performance on these datasets, particularly on queries with poor initial recall where techniques like query expansion suffer. Overall we show that soft-assignment is always beneficial for retrieval with large vocabularies, at a cost of increased storage requirements for the index." ] }