Schema: aid (string, length 9-15), mid (string, length 7-10), abstract (string, length 78-2.56k), related_work (string, length 92-1.77k), ref_abstract (dict).
1907.02584
2953945697
We propose a fast, model-agnostic method for finding interpretable counterfactual explanations of classifier predictions by using class prototypes. We show that class prototypes, obtained using either an encoder or through class-specific k-d trees, significantly speed up the search for counterfactual instances and result in more interpretable explanations. We introduce two novel metrics to quantitatively evaluate local interpretability at the instance level. We use these metrics to illustrate the effectiveness of our method on an image dataset and a tabular dataset, MNIST and Breast Cancer Wisconsin (Diagnostic), respectively. The method also eliminates the computational bottleneck that arises because of numerical gradient evaluation for @math models.
Incorporating prototypes in the objective function leads to more interpretable counterfactuals (a minimal sketch of the prototype-guided search follows the reference block below). We introduce two novel metrics which focus on local interpretability with respect to the training data distribution. This differs from @cite_26 , who define an interpretability metric relative to a target model. @cite_23 , on the other hand, quantify interpretability through a human pilot study measuring the accuracy and efficiency of humans on a predictive task. @cite_21 also highlight the importance of good local data representations for generating high-quality explanations.
{ "cite_N": [ "@cite_26", "@cite_21", "@cite_23" ], "mid": [ "2625712209", "2946940672", "2551974706" ], "abstract": [ "We provide a novel notion of what it means to be interpretable, looking past the usual association with human understanding. Our key insight is that interpretability is not an absolute concept and so we define it relative to a target model, which may or may not be a human. We define a framework that allows for comparing interpretable procedures by linking it to important practical aspects such as accuracy and robustness. We characterize many of the current state-of-the-art interpretable methods in our framework portraying its general applicability. Finally, principled interpretable strategies are proposed and empirically evaluated on synthetic data, as well as on the largest public olfaction dataset that was made recently available olfs . We also experiment on MNIST with a simple target model and different oracle models of varying complexity. This leads to the insight that the improvement in the target model is not only a function of the oracle models performance, but also its relative complexity with respect to the target model.", "Explaining decisions of deep neural networks is a hot research topic with applications in medical imaging, video surveillance, and self driving cars. Many methods have been proposed in literature to explain these decisions by identifying relevance of different pixels. In this paper, we propose a method that can generate contrastive explanations for such data where we not only highlight aspects that are in themselves sufficient to justify the classification by the deep model, but also new aspects which if added will change the classification. One of our key contributions is how we define \"addition\" for such rich data in a formal yet humanly interpretable way that leads to meaningful results. This was one of the open questions laid out in Dhurandhar this http URL. (2018) [5], which proposed a general framework for creating (local) contrastive explanations for deep models. We showcase the efficacy of our approach on CelebA and Fashion-MNIST in creating intuitive explanations that are also quantitatively superior compared with other state-of-the-art interpretability methods.", "Example-based explanations are widely used in the effort to improve the interpretability of highly complex distributions. However, prototypes alone are rarely sufficient to represent the gist of the complexity. In order for users to construct better mental models and understand complex data distributions, we also need criticism to explain what are captured by prototypes. Motivated by the Bayesian model criticism framework, we develop MMD-critic which efficiently learns prototypes and criticism, designed to aid human interpretability. A human subject pilot study shows that the MMD-critic selects prototypes and criticism that are useful to facilitate human understanding and reasoning. We also evaluate the prototypes selected by MMD-critic via a nearest prototype classifier, showing competitive performance compared to baselines." ] }
1907.02684
2952015385
Head-driven phrase structure grammar (HPSG) enjoys a uniform formalism representing rich contextual syntactic and even semantic meanings. This paper makes the first attempt to formulate a simplified HPSG by integrating constituent and dependency formal representations into head-driven phrase structure. Two parsing algorithms are then proposed for the two converted tree representations, division span and joint span, respectively. As HPSG encodes both constituent and dependency structure information, the proposed HPSG parsers may be regarded as a sort of joint decoder for both types of structures and are thus evaluated in terms of extracted or converted constituent and dependency parsing trees. Our parser achieves new state-of-the-art performance for both parsing tasks on Penn Treebank (PTB) and Chinese Penn Treebank, verifying the effectiveness of jointly learning constituent and dependency structures. In detail, we report 95.84 F1 for constituent parsing and 97.00 UAS for dependency parsing on PTB.
In earlier times, linguists and NLP researchers discussed how to encode lexical dependencies in phrase structures, as in lexicalized tree adjoining grammar (LTAG) @cite_18 and head-driven phrase structure grammar (HPSG) @cite_2 , a constraint-based, highly lexicalized, non-derivational generative grammar framework.
{ "cite_N": [ "@cite_18", "@cite_2" ], "mid": [ "2131986285", "2038248725" ], "abstract": [ "In this paper we present a general parsing strategy that arose from the development of an Earley-type parsing algorithm for TAGs (Schabes and Joshi 1988) and from recent linguistic work in TAGs (Abeille 1988).In our approach elementary structures are associated with their lexical heads. These structures specify extended domains of locality (as compared to a context-free grammar) over which constraints can be stated. These constraints either hold within the elementary structure itself or specify what other structures can be composed with a given elementary structure.We state the conditions under which context-free based grammars can be 'lexicalized' without changing the linguistic structures originally produced. We argue that even if one extends the domain of locality of CFGs to trees, using only substitution does not give the freedom to choose the head of each structure. We show how adjunction allows us to 'lexicalize' a CFG freely.We then show how a 'lexicalized' grammar naturally follows from the extended domain of locality of TAGs and present some of the linguistic advantages of our approach.A novel general parsing strategy for 'lexicalized' grammars is discussed. In a first stage, the parser builds a set structures corresponding to the input sentence and in a second stage, the sentence is parsed with respect to this set. The strategy is independent of the linguistic theory adopted and of the underlying grammar formalism. However, we focus our attention on TAGs. Since the set of trees needed to parse an input sentence is supposed to be finite, the parser can use in principle any search strategy. Thus, in particular, a top-down strategy can be used since problems due to recursive structures are eliminated. The parser is also able to use non-local information to guide the search.We then explain how the Earley-type parser for TAGs can be modified to take advantage of this approach.", "This book presents the most complete exposition of the theory of head-driven phrase structure grammar (HPSG), introduced in the authors' \"Information-Based Syntax and Semantics.\" HPSG provides an integration of key ideas from the various disciplines of cognitive science, drawing on results from diverse approaches to syntactic theory, situation semantics, data type theory, and knowledge representation. The result is a conception of grammar as a set of declarative and order-independent constraints, a conception well suited to modelling human language processing. This self-contained volume demonstrates the applicability of the HPSG approach to a wide range of empirical problems, including a number which have occupied center-stage within syntactic theory for well over twenty years: the control of \"understood\" subjects, long-distance dependencies conventionally treated in terms of \"wh\"-movement, and syntactic constraints on the relationship between various kinds of pronouns and their antecedents. The authors make clear how their approach compares with and improves upon approaches undertaken in other frameworks, including in particular the government-binding theory of Noam Chomsky." ] }
1907.02684
2952015385
Head-driven phrase structure grammar (HPSG) enjoys a uniform formalism representing rich contextual syntactic and even semantic meanings. This paper makes the first attempt to formulate a simplified HPSG by integrating constituent and dependency formal representations into head-driven phrase structure. Two parsing algorithms are then proposed for the two converted tree representations, division span and joint span, respectively. As HPSG encodes both constituent and dependency structure information, the proposed HPSG parsers may be regarded as a sort of joint decoder for both types of structures and are thus evaluated in terms of extracted or converted constituent and dependency parsing trees. Our parser achieves new state-of-the-art performance for both parsing tasks on Penn Treebank (PTB) and Chinese Penn Treebank, verifying the effectiveness of jointly learning constituent and dependency structures. In detail, we report 95.84 F1 for constituent parsing and 97.00 UAS for dependency parsing on PTB.
Meanwhile, because HPSG represents the grammar framework in a precisely constrained way, it is difficult for it to broadly cover unseen real-world texts in parsing. Consequently, according to @cite_60 , many large-scale grammar implementations are forced either to compromise linguistic preciseness or to accept low parsing coverage. Previous work on HPSG approximation follows two major approaches: the grammar-based approach @cite_20 and the corpus-driven approach of @cite_5 and @cite_60 , the latter of which proposes PCFG approximation as a way to alleviate some of these issues in HPSG processing (a toy sketch of the corpus-driven MLE step follows the reference block below).
{ "cite_N": [ "@cite_5", "@cite_20", "@cite_60" ], "mid": [ "2069403915", "2097730343", "2155558996" ], "abstract": [ "We present a simple and intuitive unsound corpus-driven approximation method for turning unification-based grammars, such as HPSG, CLE, or PATR-II into context-free grammars (CFGs). Our research is motivated by the idea that we can exploit (large-scale), hand-written unification grammars not only for the purpose of describing natural language and obtaining a syntactic structure (and perhaps a semantic form), but also to address several other very practical topics. Firstly, to speed up deep parsing by having a cheap recognition pre-flter (the approximated CFG). Secondly, to obtain an indirect stochastic parsing model for the unification grammar through a trained PCFG, obtained from the approximated CFG. This gives us an efficient disambiguation model for the unification-based grammar. Thirdly, to generate domain-specific subgrammars for application areas such as information extraction or question answering. And finally, to compile context-free language models which assist the acoustic model of a speech recognizer. The approximation method is unsound in that it does not generate a CFG whose language is a true superset of the language accepted by the original unification-based grammar. It is a corpus-driven method in that it relies on a corpus of parsed sentences and generates broader CFGs when given more input samples. Our open approach can be fine-tuned in different directions, allowing us to monotonically come close to the original parse trees by shifting more information into the context-free symbols. The approach has been fully implemented in JAVA.", "We present a simple and intuitive approximation method for turning unification-based grammars into context-free grammars. We apply our method to several grammars and report on the quality of the approximation. We also present several methods that speed up the approximation process and that might be interesting to other areas of unification-based processing. Finally, we introduce a novel disambiguation method for unification grammars which is based on probabilistic context-free approximations.", "We present a novel corpus-driven approach towards grammar approximation for a linguistically deep Head-driven Phrase Structure Grammar. With an unlexicalized probabilistic context-free grammar obtained by Maximum Likelihood Estimate on a large-scale automatically annotated corpus, we are able to achieve parsing accuracy higher than the original HPSG-based model. Different ways of enriching the annotations carried by the approximating PCFG are proposed and compared. Comparison to the state-of-the-art latent-variable PCFG shows that our approach is more suitable for the grammar approximation task where training data can be acquired automatically. The best approximating PCFG achieved ParsEv-al F1 accuracy of 84.13 . The high robustness of the PCFG suggests it is a viable way of achieving full coverage parsing with the hand-written deep linguistic grammars." ] }
1907.02844
2954922979
Geodesic distance is the shortest path between two points in a Riemannian manifold. Manifold learning algorithms, such as Isomap, seek to learn a manifold that preserves geodesic distances. However, such methods operate on the ambient dimensionality, and are therefore fragile to noise dimensions. We developed an unsupervised random forest method (URerF) to approximately learn geodesic distances in linear and nonlinear manifolds with noise. URerF operates on low-dimensional sparse linear combinations of features, rather than the full observed dimensionality. To choose the optimal split in a computationally efficient fashion, we developed a fast Bayesian Information Criterion statistic for Gaussian mixture models. We introduce geodesic precision-recall curves which quantify performance relative to the true latent manifold. Empirical results on simulated and real data demonstrate that URerF is robust to high-dimensional noise, whereas other methods, such as Isomap, UMAP, and FLANN, quickly deteriorate in such settings. In particular, URerF is able to estimate geodesic distances on a real connectome dataset better than other approaches.
Nonlinear manifold learning approaches, such as Isomap @cite_18 , Laplacian eigenmaps @cite_0 , and UMAP @cite_19 , are designed to preserve geodesic distances, and even to estimate them directly. Specifically, they follow a three-step process (sketched after the reference block below). First, they estimate geodesic distances on the original manifold by constructing a @math -nearest-neighbor or @math -neighborhood graph in which the observations (data points) correspond to nodes and the pairwise Euclidean distances between these points correspond to edge weights. Second, the all-pairs shortest paths between the nodes in the graph are computed. Third, the points are embedded in a lower-dimensional space that ideally preserves these distances. This approach is significantly hampered by the first step, which operates in the original high-dimensional ambient space, since Euclidean distances often fail to provide good estimates of distances on the manifold. Moreover, given @math datapoints, computing all pairwise distances requires @math space and time, and computing all pairwise shortest paths can require @math , both of which can be cost-prohibitive for large sample sizes.
{ "cite_N": [ "@cite_0", "@cite_19", "@cite_18" ], "mid": [ "2156718197", "2786672974", "2001141328" ], "abstract": [ "Drawing on the correspondence between the graph Laplacian, the Laplace-Beltrami operator on a manifold, and the connections to the heat equation, we propose a geometrically motivated algorithm for constructing a representation for data sampled from a low dimensional manifold embedded in a higher dimensional space. The algorithm provides a computationally efficient approach to nonlinear dimensionality reduction that has locality preserving properties and a natural connection to clustering. Several applications are considered.", "UMAP (Uniform Manifold Approximation and Projection) is a novel manifold learning technique for dimension reduction. UMAP is constructed from a theoretical framework based in Riemannian geometry and algebraic topology. The result is a practical scalable algorithm that applies to real world data. The UMAP algorithm is competitive with t-SNE for visualization quality, and arguably preserves more of the global structure with superior run time performance. Furthermore, UMAP as described has no computational restrictions on embedding dimension, making it viable as a general purpose dimension reduction technique for machine learning.", "Scientists working with large volumes of high-dimensional data, such as global climate patterns, stellar spectra, or human gene distributions, regularly confront the problem of dimensionality reduction: finding meaningful low-dimensional structures hidden in their high-dimensional observations. The human brain confronts the same problem in everyday perception, extracting from its high-dimensional sensory inputs—30,000 auditory nerve fibers or 106 optic nerve fibers—a manageably small number of perceptually relevant features. Here we describe an approach to solving dimensionality reduction problems that uses easily measured local metric information to learn the underlying global geometry of a data set. Unlike classical techniques such as principal component analysis (PCA) and multidimensional scaling (MDS), our approach is capable of discovering the nonlinear degrees of freedom that underlie complex natural observations, such as human handwriting or images of a face under different viewing conditions. In contrast to previous algorithms for nonlinear dimensionality reduction, ours efficiently computes a globally optimal solution, and, for an important class of data manifolds, is guaranteed to converge asymptotically to the true structure." ] }
1907.02844
2954922979
Geodesic distance is the shortest path between two points in a Riemannian manifold. Manifold learning algorithms, such as Isomap, seek to learn a manifold that preserves geodesic distances. However, such methods operate on the ambient dimensionality, and are therefore fragile to noise dimensions. We developed an unsupervised random forest method (URerF) to approximately learn geodesic distances in linear and nonlinear manifolds with noise. URerF operates on low-dimensional sparse linear combinations of features, rather than the full observed dimensionality. To choose the optimal split in a computationally efficient fashion, we developed a fast Bayesian Information Criterion statistic for Gaussian mixture models. We introduce geodesic precision-recall curves which quantify performance relative to the true latent manifold. Empirical results on simulated and real data demonstrate that URerF is robust to high-dimensional noise, whereas other methods, such as Isomap, UMAP, and FLANN, quickly deteriorate in such settings. In particular, URerF is able to estimate geodesic distances on a real connectome dataset better than other approaches.
One of the most widely used methods for nonlinear dimensionality reduction is Isomap @cite_18 . Isomap is one of the few manifold learning algorithms with theoretical guarantees for correctly estimating the manifold under certain assumptions @cite_23 . In the presence of many noisy dimensions, however, Isomap fails to construct an accurate nearest-neighbor graph on the latent manifold. Moreover, Isomap requires storing all point-to-point graph distances, which incurs space and time complexity quadratic in the sample size.
{ "cite_N": [ "@cite_18", "@cite_23" ], "mid": [ "2001141328", "2156287497" ], "abstract": [ "Scientists working with large volumes of high-dimensional data, such as global climate patterns, stellar spectra, or human gene distributions, regularly confront the problem of dimensionality reduction: finding meaningful low-dimensional structures hidden in their high-dimensional observations. The human brain confronts the same problem in everyday perception, extracting from its high-dimensional sensory inputs—30,000 auditory nerve fibers or 106 optic nerve fibers—a manageably small number of perceptually relevant features. Here we describe an approach to solving dimensionality reduction problems that uses easily measured local metric information to learn the underlying global geometry of a data set. Unlike classical techniques such as principal component analysis (PCA) and multidimensional scaling (MDS), our approach is capable of discovering the nonlinear degrees of freedom that underlie complex natural observations, such as human handwriting or images of a face under different viewing conditions. In contrast to previous algorithms for nonlinear dimensionality reduction, ours efficiently computes a globally optimal solution, and, for an important class of data manifolds, is guaranteed to converge asymptotically to the true structure.", "Recently proposed algorithms for nonlinear dimensionality reduction fall broadly into two categories which have different advantages and disadvantages: global (Isomap [1]), and local (Locally Linear Embedding [2], Laplacian Eigenmaps [3]). We present two variants of Isomap which combine the advantages of the global approach with what have previously been exclusive advantages of local methods: computational sparsity and the ability to invert conformal maps." ] }
1907.02844
2954922979
Geodesic distance is the shortest path between two points in a Riemannian manifold. Manifold learning algorithms, such as Isomap, seek to learn a manifold that preserves geodesic distances. However, such methods operate on the ambient dimensionality, and are therefore fragile to noise dimensions. We developed an unsupervised random forest method (URerF) to approximately learn geodesic distances in linear and nonlinear manifolds with noise. URerF operates on low-dimensional sparse linear combinations of features, rather than the full observed dimensionality. To choose the optimal split in a computationally efficient fashion, we developed a fast Bayesian Information Criterion statistic for Gaussian mixture models. We introduce geodesic precision-recall curves which quantify performance relative to the true latent manifold. Empirical results on simulated and real data demonstrate that URerF is robust to high-dimensional noise, whereas other methods, such as Isomap, UMAP, and FLANN, quickly deteriorate in such settings. In particular, URerF is able to estimate geodesic distances on a real connectome dataset better than other approaches.
Approximate nearest-neighbor algorithms, such as FLANN @cite_20 , find approximate nearest neighbors in high-dimensional data sets, typically by building binary space-partitioning trees such as @math -d trees. These algorithms are designed to estimate distances in the observed high-dimensional space, so when the true manifold is low-dimensional and the data are high-dimensional, the additional noise dimensions are problematic for all of them (illustrated after the reference block below). On the other hand, these approaches can achieve near-linear space and time complexity.
{ "cite_N": [ "@cite_20" ], "mid": [ "2086504823" ], "abstract": [ "For many computer vision and machine learning problems, large training sets are key for good performance. However, the most computationally expensive part of many computer vision and machine learning algorithms consists of finding nearest neighbor matches to high dimensional vectors that represent the training data. We propose new algorithms for approximate nearest neighbor matching and evaluate and compare them with previous algorithms. For matching high dimensional features, we find two algorithms to be the most efficient: the randomized k-d forest and a new algorithm proposed in this paper, the priority search k-means tree. We also propose a new algorithm for matching binary features by searching multiple hierarchical clustering trees and show it outperforms methods typically used in the literature. We show that the optimal nearest neighbor algorithm and its parameters depend on the data set characteristics and describe an automated configuration procedure for finding the best algorithm to search a particular data set. In order to scale to very large data sets that would otherwise not fit in the memory of a single machine, we propose a distributed nearest neighbor matching framework that can be used with any of the algorithms described in the paper. All this research has been released as an open source library called fast library for approximate nearest neighbors (FLANN), which has been incorporated into OpenCV and is now one of the most popular libraries for nearest neighbor matching." ] }
1907.02844
2954922979
Geodesic distance is the shortest path between two points in a Riemannian manifold. Manifold learning algorithms, such as Isomap, seek to learn a manifold that preserves geodesic distances. However, such methods operate on the ambient dimensionality, and are therefore fragile to noise dimensions. We developed an unsupervised random forest method (URerF) to approximately learn geodesic distances in linear and nonlinear manifolds with noise. URerF operates on low-dimensional sparse linear combinations of features, rather than the full observed dimensionality. To choose the optimal split in a computationally efficient fashion, we developed a fast Bayesian Information Criterion statistic for Gaussian mixture models. We introduce geodesic precision-recall curves which quantify performance relative to the true latent manifold. Empirical results on simulated and real data demonstrate that URerF is robust to high-dimensional noise, whereas other methods, such as Isomap, UMAP, and FLANN, quickly deteriorate in such settings. In particular, URerF is able to estimate geodesic distances on a real connectome dataset better than other approaches.
This work is inspired by, and closely related to, random projection trees for manifold learning @cite_17 and vector quantization @cite_14 . The main differences between our approach and theirs are that (1) they use random splits rather than optimized splits, and (2) they use a single tree, whereas URerF uses a forest of many trees (a toy single-tree sketch follows the reference block below). Nonetheless, their theoretical analysis motivates the geodesic precision metric we establish for quantifying the performance of geodesic learning.
{ "cite_N": [ "@cite_14", "@cite_17" ], "mid": [ "2117250207", "2118123209" ], "abstract": [ "A simple and computationally efficient scheme for tree-structured vector quantization is presented. Unlike previous methods, its quantization error depends only on the intrinsic dimension of the data distribution, rather than the apparent dimension of the space in which the data happen to lie.", "We present a simple variant of the k-d tree which automatically adapts to intrinsic low dimensional structure in data without having to explicitly learn this structure." ] }
1907.02844
2954922979
Geodesic distance is the shortest path between two points in a Riemannian manifold. Manifold learning algorithms, such as Isomap, seek to learn a manifold that preserves geodesic distances. However, such methods operate on the ambient dimensionality, and are therefore fragile to noise dimensions. We developed an unsupervised random forest method (URerF) to approximately learn geodesic distances in linear and nonlinear manifolds with noise. URerF operates on low-dimensional sparse linear combinations of features, rather than the full observed dimensionality. To choose the optimal split in a computationally efficient fashion, we developed a fast Bayesian Information Criterion statistic for Gaussian mixture models. We introduce geodesic precision-recall curves which quantify performance relative to the true latent manifold. Empirical results on simulated and real data demonstrate that URerF is robust to high-dimensional noise, whereas other methods, such as Isomap, UMAP, and FLANN, quickly deteriorate in such settings. In particular, URerF is able to estimate geodesic distances on a real connectome dataset better than other approaches.
Finally, most closely related to our method are existing unsupervised random forest methods, the most popular being the one included in Adele Cutler's RandomForest R package @cite_33 . It generates a synthetic copy of the data by randomly permuting each feature independently of the others, and then attempts to classify real versus synthetic examples (sketched after the reference block below). As will be seen below, this approach can miss surprisingly simple latent structures.
{ "cite_N": [ "@cite_33" ], "mid": [ "2021833436" ], "abstract": [ "A random forest (RF) predictor is an ensemble of individual tree predictors. As part of their construction, RF predictors naturally lead to a dissimilarity measure between the observations. One can also define an RF dissimilarity measure between unlabeled data: the idea is to construct an RF predictor that distinguishes the “observed” data from suitably generated synthetic data. The observed data are the original unlabeled data and the synthetic data are drawn from a reference distribution. Here we describe the properties of the RF dissimilarity and make recommendations on how to use it in practice.An RF dissimilarity can be attractive because it handles mixed variable types well, is invariant to monotonic transformations of the input variables, and is robust to outlying observations. The RF dissimilarity easily deals with a large number of variables due to its intrinsic variable selection; for example, the Addcl 1 RF dissimilarity weighs the contribution of each variable according to how dependent it is ..." ] }
1812.08247
2904996883
Image forensics is an increasingly relevant problem, as it can potentially address online disinformation campaigns and mitigate problematic aspects of social media. Of particular interest, given its recent successes, is the detection of imagery produced by Generative Adversarial Networks (GANs), e.g., 'deepfakes'. Leveraging large training sets and extensive computing resources, recent work has shown that GANs can be trained to generate synthetic imagery which is (in some ways) indistinguishable from real imagery. We analyze the structure of the generating network of a popular GAN implementation, and show that the network's treatment of color is markedly different from a real camera in two ways. We further show that these two cues can be used to distinguish GAN-generated imagery from camera imagery, demonstrating effective discrimination between GAN imagery and real camera images used to train the GAN.
Since their introduction in 2014 @cite_6 , GANs have quickly become an extremely valuable tool in a range of computer vision applications. At a high level, the concept of a GAN is that two networks are trained to compete with one another: the 'generator' network is trained to produce artificial imagery that is indistinguishable from a given dataset of real imagery, whereas the 'discriminator' is trained to correctly classify imagery as being either real or coming from the generator (a minimal training loop is sketched after the reference block below). Early attempts at this @cite_15 were able to generate convincing imagery of simple image datasets such as MNIST digits @cite_2 , but had a harder time mimicking more complicated images. More recently, computational techniques have been introduced which can generate convincing facial imagery @cite_9 and have increased the resolution of generated imagery @cite_0 .
{ "cite_N": [ "@cite_9", "@cite_6", "@cite_0", "@cite_2", "@cite_15" ], "mid": [ "2766527293", "2099471712", "2963800363", "2310919327", "2432004435" ], "abstract": [ "We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CelebA images at 1024^2. We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10. Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution, we construct a higher-quality version of the CelebA dataset.", "We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.", "We present a new method for synthesizing high-resolution photo-realistic images from semantic label maps using conditional generative adversarial networks (conditional GANs). Conditional GANs have enabled a variety of applications, but the results are often limited to low-resolution and still far from realistic. In this work, we generate 2048 A— 1024 visually appealing results with a novel adversarial loss, as well as new multi-scale generator and discriminator architectures. Furthermore, we extend our framework to interactive visual manipulation with two additional features. First, we incorporate object instance segmentation information, which enables object manipulations such as removing adding objects and changing the object category. Second, we propose a method to generate diverse results given the same input, allowing users to edit the object appearance interactively. Human opinion studies demonstrate that our method significantly outperforms existing methods, advancing both the quality and the resolution of deep image synthesis and editing.", "", "We present a variety of new architectural features and training procedures that we apply to the generative adversarial networks (GANs) framework. We focus on two applications of GANs: semi-supervised learning, and the generation of images that humans find visually realistic. 
Unlike most work on generative models, our primary goal is not to train a model that assigns high likelihood to test data, nor do we require the model to be able to learn well without using any labels. Using our new techniques, we achieve state-of-the-art results in semi-supervised classification on MNIST, CIFAR-10 and SVHN. The generated images are of high quality as confirmed by a visual Turing test: our model generates MNIST samples that humans cannot distinguish from real data, and CIFAR-10 samples that yield a human error rate of 21.3 . We also present ImageNet samples with unprecedented resolution and show that our methods enable the model to learn recognizable features of ImageNet classes." ] }
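The two-player training described above reduces to a short loop. Below is a minimal, hedged PyTorch sketch of the vanilla setup: the MLP architectures, dimensions, and learning rates are placeholders and correspond to no particular cited paper.

```python
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, data_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def gan_step(real):  # real: (batch, data_dim), scaled to [-1, 1]
    batch = real.size(0)
    fake = G(torch.randn(batch, latent_dim))
    # Discriminator: label real imagery 1, generated imagery 0.
    loss_d = (bce(D(real), torch.ones(batch, 1))
              + bce(D(fake.detach()), torch.zeros(batch, 1)))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator: fool the discriminator into labeling fakes as real.
    loss_g = bce(D(fake), torch.ones(batch, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```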
1812.08226
2904013300
In this paper, we investigate the possibility of applying plan transformations to general manipulation plans in order to specialize them to the specific situation at hand. We present a framework for optimizing execution and achieving higher performance by autonomously transforming the robot's behavior at runtime. We show that plans employed by robotic agents in real-world environments can be transformed, despite their control structures being very complex due to the specifics of acting in the real world. The evaluation is carried out on a plan of a PR2 robot performing pick-and-place tasks, to which we apply three example transformations, as well as on a large number of experiments in a fast plan-projection environment.
From the area of automatic program transformations, Sussman's HACKER @cite_0 is a system that can change its programs when it encounters a bug, and the knowledge gained from discovering the bug can be generalized and stored for future use. Transformations in HACKER result in programs with equivalent semantics. In robotics applications, transformations are applied to behavior specifications that have to be executable in perception-action loops, in which noisy sensor data and partial observability affect program semantics. Transformational planners such as Hammond's CHEF @cite_5 and Simmons's GORDIUS @cite_8 repair failing plans by searching for causal explanations of why a failure happened and replacing invalid assumptions in the plan.
{ "cite_N": [ "@cite_0", "@cite_5", "@cite_8" ], "mid": [ "1578830675", "2019270863", "164718692" ], "abstract": [ "", "Abstract A persistent problem in machine planning is that of repairing plans that fail. One approach to this problem that has been shown to be quite powerful is based on the idea that detailed descriptions of the causes of a failure can be used to decide between the different repairs that can be applied. This paper presents an approach to repair in which plan failures are described in terms of causal explanations of why they occurred. These domain-level explanations are used to access abstract repair strategies, which are then used to make specific changes to the faulty plans. The approach is demonstrated using examples from CHEF, a case-based planner that creates and debugs plans in the domain of Szechwan cooking. While the approach discussed here is examined in terms of actual plan failures, this technique can also be used in the repair of plans that are discovered to be faulty prior to their actual running.", "We present a theory of debugging applicable for planning and interpretation problems. The debugger analyzes causal explanations for why a bug arises to locate the underlying assumptions upon which the bug depends. A bug is repaired by replacing assumptions, using a small set of domain-independent debugging strategies that reason about the causal explanations and domain models that encode the effects of events. Our analysis of the planning and interpretation tasks indicates that only a small set of assumptions and associated repair strategies are needed to handle a wide range of bugs over a large class of domains. Our debugging approach extends previous work in both debugging and domain-independent planning. The approach, however, is computationally expensive and so is used in the context of the Generate, Test and Debug paradigm, in which the debugger is used only if the heuristic generator produces an incorrect hypothesis." ] }
1812.08226
2904013300
In this paper, we investigate the possibility of applying plan transformations to general manipulation plans in order to specialize them to the specific situation at hand. We present a framework for optimizing execution and achieving higher performance by autonomously transforming the robot's behavior at runtime. We show that plans employed by robotic agents in real-world environments can be transformed, despite their control structures being very complex due to the specifics of acting in the real world. The evaluation is carried out on a plan of a PR2 robot performing pick-and-place tasks, to which we apply three example transformations, as well as on a large number of experiments in a fast plan-projection environment.
In multi-robot systems, Botelho and Alami @cite_4 present an architecture in which autonomous robots cooperatively enhance their execution performance by detecting and recovering from failures, with a focus on resource conflicts between the robots. The authors of @cite_6 implemented a distributed architecture which supports plan-repair operations for failures concerning more than one robot; the repair strategies are kept as local as possible to avoid unnecessary multi-robot communication.
{ "cite_N": [ "@cite_4", "@cite_6" ], "mid": [ "2067776455", "2069721309" ], "abstract": [ "Program transformation is used in a wide range of applications including compiler construction, optimization, program synthesis, refactoring, software renovation, and reverse engineering. Complex program transformations are achieved through a number of consecutive modications of a program. Transformation rules dene basic modications. A transformation strategy is an algorithm for choosing a path in the rewrite relation induced by a set of rules. This paper surveys the support for the denition of strategies in program transformation systems. After a discussion of kinds of program transformation and choices in program representation, the basic elements of a strategy system are discussed and the choices in the design of a strategy language are considered. Several styles of strategy systems as provided in existing languages are then analyzed.", "This paper presents HiDDeN, a high-level distributed architecture for multi-robot cooperation. HiDDeN aims at controlling a team of heterogeneous robots in environments with uncertain communications. It relies on a mission plan defined as an instantiated HTN, i.e. a hierarchical decomposition of robots' tasks. This hierarchical structure also benefits to plan repair operations in case of failure detections. This repair is made as local as possible, in order to avoid unnecessary communications between robots." ] }
1812.08352
2904407897
In this paper, we introduce a new task - interactive image editing via conversational language, where users can guide an agent to edit images via multi-turn natural language dialogue. In each dialogue turn, the agent takes a source image and a natural language description from the user as the input and generates a new image following the textual description. Two new datasets are introduced for this task, Zap-Seq and DeepFashion-Seq. We propose a novel Sequential Attention Generative Adversarial Network (SeqAttnGAN) framework, which applies a neural state tracker to encode both the source image and the textual description in each dialogue turn and generates a high-quality new image consistent with both the preceding images and the dialogue context. To achieve better region-specific text-to-image generation, we also introduce an attention mechanism into the model. Experiments on the two new datasets show that the proposed SeqAttnGAN model outperforms state-of-the-art (SOTA) approaches on the dialogue-based image editing task. Detailed quantitative evaluation and user study also demonstrate that our model is more effective than SOTA baselines on image generation, in terms of both visual quality and text-to-image consistency.
Language-based image editing @cite_24 @cite_3 is a task designed to minimize manual effort while helping users create visual data. Specifically, systems that perform automatic image editing must understand which part of the image the user is referring to. This is a very challenging task that requires comprehensive understanding of both natural language and visual information. Following this thread, several studies have explored the task. Hu @cite_26 tackled language-based image segmentation, taking a phrase as input. Ramesh @cite_3 developed a system that uses simple language to modify an image, with a classification model used to understand user intent. Wang @cite_5 proposed a neural model for global image editing.
{ "cite_N": [ "@cite_24", "@cite_5", "@cite_26", "@cite_3" ], "mid": [ "2770776392", "2897946384", "2302548814", "" ], "abstract": [ "We investigate the problem of Language-Based Image Editing (LBIE) in this work. Given a source image and a natural language description, we want to generate a target image by editing the source im- age based on the description. We propose a generic modeling framework for two sub-tasks of LBIE: language-based image segmentation and image colorization. The framework uses recurrent attentive models to fuse image and language features. Instead of using a fixed step size, we introduce for each re- gion of the image a termination gate to dynamically determine in each inference step whether to continue extrapolating additional information from the textual description. The effectiveness of the framework has been validated on three datasets. First, we introduce a synthetic dataset, called CoSaL, to evaluate the end-to-end performance of our LBIE system. Second, we show that the framework leads to state-of-the- art performance on image segmentation on the ReferIt dataset. Third, we present the first language-based colorization result on the Oxford-102 Flowers dataset, laying the foundation for future research.", "We show how we can globally edit images using textual instructions: given a source image and a textual instruction for the edit, generate a new image transformed under this instruction. To tackle this novel problem, we develop three different trainable models based on RNN and Generative Adversarial Network (GAN). The models (bucket, filter bank, and end-to-end) differ in how much expert knowledge is encoded, with the most general version being purely end-to-end. To train these systems, we use Amazon Mechanical Turk to collect textual descriptions for around 2000 image pairs sampled from several datasets. Experimental results evaluated on our dataset validate our approaches. In addition, given that the filter bank model is a good compromise between generality and performance, we investigate it further by replacing RNN with Graph RNN, and show that Graph RNN improves performance. To the best of our knowledge, this is the first computational photography work on global image editing that is purely based on free-form textual instructions.", "In this paper we approach the novel problem of segmenting an image based on a natural language expression. This is different from traditional semantic segmentation over a predefined set of semantic classes, as e.g., the phrase “two men sitting on the right bench” requires segmenting only the two people on the right bench and no one standing or sitting on another bench. Previous approaches suitable for this task were limited to a fixed set of categories and or rectangular regions. To produce pixelwise segmentation for the language expression, we propose an end-to-end trainable recurrent and convolutional network model that jointly learns to process visual and linguistic information. In our model, a recurrent neural network is used to encode the referential expression into a vector representation, and a fully convolutional network is used to a extract a spatial feature map from the image and output a spatial response map for the target object. We demonstrate on a benchmark dataset that our model can produce quality segmentation output from the natural language expression, and outperforms baseline methods by a large margin.", "" ] }
1812.08352
2904407897
In this paper, we introduce a new task - interactive image editing via conversational language, where users can guide an agent to edit images via multi-turn natural language dialogue. In each dialogue turn, the agent takes a source image and a natural language description from the user as the input and generates a new image following the textual description. Two new datasets are introduced for this task, Zap-Seq and DeepFashion-Seq. We propose a novel Sequential Attention Generative Adversarial Network (SeqAttnGAN) framework, which applies a neural state tracker to encode both the source image and the textual description in each dialogue turn and generates a high-quality new image consistent with both the preceding images and the dialogue context. To achieve better region-specific text-to-image generation, we also introduce an attention mechanism into the model. Experiments on the two new datasets show that the proposed SeqAttnGAN model outperforms state-of-the-art (SOTA) approaches on the dialogue-based image editing task. Detailed quantitative evaluation and user study also demonstrate that our model is more effective than SOTA baselines on image generation, in terms of both visual quality and text-to-image consistency.
Since the introduction of GANs @cite_11 , there has been a surge of interest in image generation tasks. In the conditional GAN space, there have been studies on generating images from images @cite_30 , captions @cite_32 , attributes @cite_20 , and object patches @cite_7 . There have also been studies on how to parameterize the models and the training framework @cite_19 beyond the vanilla GAN @cite_38 (a minimal conditioning sketch follows the reference block below). Zhang @cite_21 stacked several GANs for text-to-image synthesis, with different GANs generating images at different resolutions. In these studies, the image is synthesized at the context level but is not region-specific.
{ "cite_N": [ "@cite_30", "@cite_38", "@cite_7", "@cite_21", "@cite_32", "@cite_19", "@cite_20", "@cite_11" ], "mid": [ "2552465644", "2950776302", "2796322794", "2964024144", "", "2125389028", "2585027717", "" ], "abstract": [ "We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. Indeed, since the release of the pix2pix software associated with this paper, a large number of internet users (many of them artists) have posted their own experiments with our system, further demonstrating its wide applicability and ease of adoption without the need for parameter tweaking. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without hand-engineering our loss functions either.", "Synthesizing high resolution photorealistic images has been a long-standing challenge in machine learning. In this paper we introduce new methods for the improved training of generative adversarial networks (GANs) for image synthesis. We construct a variant of GANs employing label conditioning that results in 128x128 resolution image samples exhibiting global coherence. We expand on previous work for image quality assessment to provide two new analyses for assessing the discriminability and diversity of samples from class-conditional image synthesis models. These analyses demonstrate that high resolution samples provide class information not present in low resolution samples. Across 1000 ImageNet classes, 128x128 samples are more than twice as discriminable as artificially resized 32x32 samples. In addition, 84.7 of the classes have samples exhibiting diversity comparable to real ImageNet data.", "State-of-the-art pedestrian detection models have achieved great success in many benchmarks. However, these models require lots of annotation information and the labeling process usually takes much time and efforts. In this paper, we propose a method to generate labeled pedestrian data and adapt them to support the training of pedestrian detectors. The proposed framework is built on the Generative Adversarial Network (GAN) with multiple discriminators, trying to synthesize realistic pedestrians and learn the background context simultaneously. To handle the pedestrians of different sizes, we adopt the Spatial Pyramid Pooling (SPP) layer in the discriminator. We conduct experiments on two benchmarks. The results show that our framework can smoothly synthesize pedestrians on background images of variations and different levels of details. To quantitatively evaluate our approach, we add the generated samples into training data of the baseline pedestrian detectors and show the synthetic images are able to improve the detectors' performance.", "Synthesizing high-quality images from text descriptions is a challenging problem in computer vision and has many practical applications. Samples generated by existing textto- image approaches can roughly reflect the meaning of the given descriptions, but they fail to contain necessary details and vivid object parts. 
In this paper, we propose Stacked Generative Adversarial Networks (StackGAN) to generate 256.256 photo-realistic images conditioned on text descriptions. We decompose the hard problem into more manageable sub-problems through a sketch-refinement process. The Stage-I GAN sketches the primitive shape and colors of the object based on the given text description, yielding Stage-I low-resolution images. The Stage-II GAN takes Stage-I results and text descriptions as inputs, and generates high-resolution images with photo-realistic details. It is able to rectify defects in Stage-I results and add compelling details with the refinement process. To improve the diversity of the synthesized images and stabilize the training of the conditional-GAN, we introduce a novel Conditioning Augmentation technique that encourages smoothness in the latent conditioning manifold. Extensive experiments and comparisons with state-of-the-arts on benchmark datasets demonstrate that the proposed method achieves significant improvements on generating photo-realistic images conditioned on text descriptions.", "", "Generative Adversarial Nets [8] were recently introduced as a novel way to train generative models. In this work we introduce the conditional version of generative adversarial nets, which can be constructed by simply feeding the data, y, we wish to condition on to both the generator and discriminator. We show that this model can generate MNIST digits conditioned on class labels. We also illustrate how this model could be used to learn a multi-modal model, and provide preliminary examples of an application to image tagging in which we demonstrate how this approach can generate descriptive tags which are not part of training labels.", "We consider the problem of data augmentation, i.e., generating artificial samples to extend a given corpus of training data. Specifically, we propose attributed-guided augmentation (AGA) which learns a mapping that allows to synthesize data such that an attribute of a synthesized sample is at a desired value or strength. This is particularly interesting in situations where little data with no attribute annotation is available for learning, but we have access to a large external corpus of heavily annotated samples. While prior works primarily augment in the space of images, we propose to perform augmentation in feature space instead. We implement our approach as a deep encoder-decoder architecture that learns the synthesis function in an end-to-end manner. We demonstrate the utility of our approach on the problems of (1) one-shot object recognition in a transfer-learning setting where we have no prior knowledge of the new classes, as well as (2) object-based one-shot scene recognition. As external data, we leverage 3D depth and pose information from the SUN RGB-D dataset. Our experiments show that attribute-guided augmentation of high-level CNN features considerably improves one-shot recognition performance on both problems.", "" ] }
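The conditioning idea behind the conditional-GAN work cited above (feeding the condition y to both networks) can be sketched in a few lines. This is an assumption-level illustration, not any cited model: class labels stand in for the condition, and concatenating a learned label embedding is one common conditioning choice.

```python
import torch
import torch.nn as nn

n_classes, latent_dim, data_dim, emb_dim = 10, 64, 784, 32
embed = nn.Embedding(n_classes, emb_dim)
G = nn.Sequential(nn.Linear(latent_dim + emb_dim, 256), nn.ReLU(),
                  nn.Linear(256, data_dim))
D = nn.Sequential(nn.Linear(data_dim + emb_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1))

def generate(z, y):
    return G(torch.cat([z, embed(y)], dim=1))      # condition visible to G

def discriminate(x, y):
    return D(torch.cat([x, embed(y)], dim=1))      # ...and to D

# Usage: sample noise and labels, then score the conditioned fakes.
z = torch.randn(8, latent_dim)
y = torch.randint(0, n_classes, (8,))
score = discriminate(generate(z, y), y)
```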
1812.08352
2904407897
In this paper, we introduce a new task - interactive image editing via conversational language, where users can guide an agent to edit images via multi-turn natural language dialogue. In each dialogue turn, the agent takes a source image and a natural language description from the user as the input and generates a new image following the textual description. Two new datasets are introduced for this task, Zap-Seq and DeepFashion-Seq. We propose a novel Sequential Attention Generative Adversarial Network (SeqAttnGAN) framework, which applies a neural state tracker to encode both the source image and the textual description in each dialogue turn and generates a high-quality new image consistent with both the preceding images and the dialogue context. To achieve better region-specific text-to-image generation, we also introduce an attention mechanism into the model. Experiments on the two new datasets show that the proposed SeqAttnGAN model outperforms state-of-the-art (SOTA) approaches on the dialogue-based image editing task. Detailed quantitative evaluation and user study also demonstrate that our model is more effective than SOTA baselines on image generation, in terms of both visual quality and text-to-image consistency.
AttnGAN @cite_6 , proposed by Xu et al., embedded an attention mechanism into the generator to focus on fine-grained word-level information. Chen et al. @cite_24 presented a framework targeting image segmentation and colorization with a recurrent attentive model. The FashionGAN work @cite_4 generated new clothing on a person based on textual descriptions. TAGAN (text-adaptive generative adversarial network) @cite_23 proposed a method for manipulating images with natural language descriptions. While these paradigms are effective, the restrictions on specific user inputs (either pre-defined attributes or single-turn interaction) limit their impact. (A minimal sketch of word-level attention follows the reference block below.)
{ "cite_N": [ "@cite_24", "@cite_23", "@cite_4", "@cite_6" ], "mid": [ "2770776392", "2950404765", "2757508077", "2771088323" ], "abstract": [ "We investigate the problem of Language-Based Image Editing (LBIE) in this work. Given a source image and a natural language description, we want to generate a target image by editing the source im- age based on the description. We propose a generic modeling framework for two sub-tasks of LBIE: language-based image segmentation and image colorization. The framework uses recurrent attentive models to fuse image and language features. Instead of using a fixed step size, we introduce for each re- gion of the image a termination gate to dynamically determine in each inference step whether to continue extrapolating additional information from the textual description. The effectiveness of the framework has been validated on three datasets. First, we introduce a synthetic dataset, called CoSaL, to evaluate the end-to-end performance of our LBIE system. Second, we show that the framework leads to state-of-the- art performance on image segmentation on the ReferIt dataset. Third, we present the first language-based colorization result on the Oxford-102 Flowers dataset, laying the foundation for future research.", "This paper addresses the problem of manipulating images using natural language description. Our task aims to semantically modify visual attributes of an object in an image according to the text describing the new visual appearance. Although existing methods synthesize images having new attributes, they do not fully preserve text-irrelevant contents of the original image. In this paper, we propose the text-adaptive generative adversarial network (TAGAN) to generate semantically manipulated images while preserving text-irrelevant contents. The key to our method is the text-adaptive discriminator that creates word-level local discriminators according to input text to classify fine-grained attributes independently. With this discriminator, the generator learns to generate images where only regions that correspond to the given text are modified. Experimental results show that our method outperforms existing methods on CUB and Oxford-102 datasets, and our results were mostly preferred on a user study. Extensive analysis shows that our method is able to effectively disentangle visual attributes and produce pleasing outputs.", "We present a novel and effective approach for generating new clothing on a wearer through generative adversarial learning. Given an input image of a person and a sentence describing a different outfit, our model \"redresses\" the person as desired, while at the same time keeping the wearer and her his pose unchanged. Generating new outfits with precise regions conforming to a language description while retaining wearer's body structure is a new challenging task. Existing generative adversarial networks are not ideal in ensuring global coherence of structure given both the input photograph and language description as conditions. We address this challenge by decomposing the complex generative process into two conditional stages. In the first stage, we generate a plausible semantic segmentation map that obeys the wearer's pose as a latent spatial arrangement. An effective spatial constraint is formulated to guide the generation of this semantic segmentation map. 
In the second stage, a generative model with a newly proposed compositional mapping layer is used to render the final image with precise regions and textures conditioned on this map. We extended the DeepFashion dataset [8] by collecting sentence descriptions for 79K images. We demonstrate the effectiveness of our approach through both quantitative and qualitative evaluations. A user study is also conducted. The codes and the data are available at this http URL edu.hk projects FashionGAN .", "In this paper, we propose an Attentional Generative Adversarial Network (AttnGAN) that allows attention-driven, multi-stage refinement for fine-grained text-to-image generation. With a novel attentional generative network, the AttnGAN can synthesize fine-grained details at different subregions of the image by paying attentions to the relevant words in the natural language description. In addition, a deep attentional multimodal similarity model is proposed to compute a fine-grained image-text matching loss for training the generator. The proposed AttnGAN significantly outperforms the previous state of the art, boosting the best reported inception score by 14.14 on the CUB dataset and 170.25 on the more challenging COCO dataset. A detailed analysis is also performed by visualizing the attention layers of the AttnGAN. It for the first time shows that the layered attentional GAN is able to automatically select the condition at the word level for generating different parts of the image." ] }
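The word-level attention used by AttnGAN-style generators can be illustrated with a short, self-contained sketch. Everything below (the array shapes, the dot-product scoring, the name `word_attention`) is an illustrative assumption rather than the cited implementation: image sub-region features attend over word embeddings, and the resulting per-region word context conditions generation.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def word_attention(regions, words):
    """regions: (N, D) image sub-region features; words: (T, D) word embeddings.
    Returns (N, D) word-context vectors, one per image region."""
    scores = regions @ words.T       # (N, T) region-word similarity
    attn = softmax(scores, axis=1)   # attend over words for each region
    return attn @ words              # weighted word context per region

# Toy usage: 16 image regions, 8 words, 32-dim features.
ctx = word_attention(np.random.randn(16, 32), np.random.randn(8, 32))
print(ctx.shape)  # (16, 32)
```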
1812.08352
2904407897
In this paper, we introduce a new task, interactive image editing via conversational language, where users can guide an agent to edit images via multi-turn natural language dialogue. In each dialogue turn, the agent takes a source image and a natural language description from the user as the input and generates a new image following the textual description. Two new datasets, Zap-Seq and DeepFashion-Seq, are introduced for this task. We propose a novel Sequential Attention Generative Adversarial Network (SeqAttnGAN) framework, which applies a neural state tracker to encode both the source image and the textual description in each dialogue turn and generates a high-quality new image consistent with both the preceding images and the dialogue context. To achieve better region-specific text-to-image generation, we also introduce an attention mechanism into the model. Experiments on the two new datasets show that the proposed SeqAttnGAN model outperforms state-of-the-art (SOTA) approaches on the dialogue-based image editing task. A detailed quantitative evaluation and a user study also demonstrate that our model is more effective than SOTA baselines on image generation, in terms of both visual quality and text-to-image consistency.
AI tasks at the intersection of computer vision and natural language processing have drawn much attention in the research community, benefiting from the latest deep learning techniques and GANs. Such tasks include visual question-answering @cite_40 , visual-semantic embeddings @cite_17 , grounding phrases in image regions @cite_14 , and image-grounded conversation @cite_33 .
{ "cite_N": [ "@cite_40", "@cite_14", "@cite_33", "@cite_17" ], "mid": [ "2950761309", "2247513039", "2963904606", "2287889828" ], "abstract": [ "We propose the task of free-form and open-ended Visual Question Answering (VQA). Given an image and a natural language question about the image, the task is to provide an accurate natural language answer. Mirroring real-world scenarios, such as helping the visually impaired, both the questions and answers are open-ended. Visual questions selectively target different areas of an image, including background details and underlying context. As a result, a system that succeeds at VQA typically needs a more detailed understanding of the image and complex reasoning than a system producing generic image captions. Moreover, VQA is amenable to automatic evaluation, since many open-ended answers contain only a few words or a closed set of answers that can be provided in a multiple-choice format. We provide a dataset containing 0.25M images, 0.76M questions, and 10M answers (www.visualqa.org), and discuss the information it provides. Numerous baselines and methods for VQA are provided and compared with human performance. Our VQA demo is available on CloudCV (this http URL).", "Grounding (i.e. localizing) arbitrary, free-form textual phrases in visual content is a challenging problem with many applications for human-computer interaction and image-text reference resolution. Few datasets provide the ground truth spatial localization of phrases, thus it is desirable to learn from data with no or little grounding supervision. We propose a novel approach which learns grounding by reconstructing a given phrase using an attention mechanism, which can be either latent or optimized directly. During training our approach encodes the phrase using a recurrent network language model and then learns to attend to the relevant image region in order to reconstruct the input phrase. At test time, the correct attention, i.e., the grounding, is evaluated. If grounding supervision is available it can be directly applied via a loss over the attention mechanism. We demonstrate the effectiveness of our approach on the Flickr30k Entities and ReferItGame datasets with different levels of supervision, ranging from no supervision over partial supervision to full supervision. Our supervised variant improves by a large margin over the state-of-the-art on both datasets.", "", "This paper proposes a method for learning joint embeddings of images and text using a two-branch neural network with multiple layers of linear projections followed by nonlinearities. The network is trained using a large margin objective that combines cross-view ranking constraints with within-view neighborhood structure preservation constraints inspired by metric learning literature. Extensive experiments show that our approach gains significant improvements in accuracy for image-to-text and text-to-image retrieval. Our method achieves new state-of-the-art results on the Flickr30K and MSCOCO image-sentence datasets and shows promise on the new task of phrase localization on the Flickr30K Entities dataset." ] }
1812.08352
2904407897
In this paper, we introduce a new task, interactive image editing via conversational language, where users can guide an agent to edit images via multi-turn natural language dialogue. In each dialogue turn, the agent takes a source image and a natural language description from the user as the input and generates a new image following the textual description. Two new datasets, Zap-Seq and DeepFashion-Seq, are introduced for this task. We propose a novel Sequential Attention Generative Adversarial Network (SeqAttnGAN) framework, which applies a neural state tracker to encode both the source image and the textual description in each dialogue turn and generates a high-quality new image consistent with both the preceding images and the dialogue context. To achieve better region-specific text-to-image generation, we also introduce an attention mechanism into the model. Experiments on the two new datasets show that the proposed SeqAttnGAN model outperforms state-of-the-art (SOTA) approaches on the dialogue-based image editing task. A detailed quantitative evaluation and a user study also demonstrate that our model is more effective than SOTA baselines on image generation, in terms of both visual quality and text-to-image consistency.
Most approaches have focused on end-to-end neural models based on encoder-decoder architectures and sequence-to-sequence learning @cite_36 @cite_1 @cite_12 @cite_13 . Das et al. @cite_16 proposed the task of visual dialogue, where the agent can answer questions about images in an interactive dialogue. De Vries et al. @cite_15 introduced the GuessWhat?! game, where a series of questions with yes/no/NA answers is asked to pinpoint a specific object in an image. However, these dialogue settings are purely text-based, and visual features play only a supporting role. DeVault @cite_34 investigated building dialogue systems that can help users efficiently explore data through visualizations. Guo et al. @cite_35 introduced an agent that presents candidate images to the user and retrieves new images based on the user's feedback. Another related work is @cite_25 , which performs interactive image generation by encoding history information. Unlike these approaches, our work uses text information to guide image generation and editing. (A sketch of a per-turn dialogue state tracker follows the reference block below.)
{ "cite_N": [ "@cite_35", "@cite_36", "@cite_1", "@cite_34", "@cite_15", "@cite_16", "@cite_13", "@cite_25", "@cite_12" ], "mid": [ "2798503981", "2949868354", "889023230", "2774397635", "2558809543", "", "2953119472", "2808162831", "2403702038" ], "abstract": [ "Existing methods for interactive image retrieval have demonstrated the merit of integrating user feedback, improving retrieval results. However, most current systems rely on restricted forms of user feedback, such as binary relevance responses, or feedback based on a fixed set of relative attributes, which limits their impact. In this paper, we introduce a new approach to interactive image search that enables users to provide feedback via natural language, allowing for more natural and effective interaction. We formulate the task of dialog-based interactive image retrieval as a reinforcement learning problem, and reward the dialog system for improving the rank of the target image during each dialog turn. To avoid the cumbersome and costly process of collecting human-machine conversations as the dialog system learns, we train our system with a user simulator, which is itself trained to describe the differences between target and candidate images. The efficacy of our approach is demonstrated in a footwear retrieval application. Extensive experiments on both simulated and real-world data show that 1) our proposed learning framework achieves better accuracy than other supervised and reinforcement learning baselines and 2) user feedback based on natural language rather than pre-specified attributes leads to more effective retrieval results, and a more natural and expressive communication interface.", "The present paper surveys neural approaches to conversational AI that have been developed in the last few years. We group conversational systems into three categories: (1) question answering agents, (2) task-oriented dialogue agents, and (3) chatbots. For each category, we present a review of state-of-the-art neural approaches, draw the connection between them and traditional approaches, and discuss the progress that has been made and challenges still being faced, using specific systems and models as case studies.", "We investigate the task of building open domain, conversational dialogue systems based on large dialogue corpora using generative models. Generative models produce system responses that are autonomously generated word-by-word, opening up the possibility for realistic, flexible interactions. In support of this goal, we extend the recently proposed hierarchical recurrent encoder-decoder neural network to the dialogue domain, and demonstrate that this model is competitive with state-of-the-art neural language models and back-off n-gram models. We investigate the limitations of this and similar approaches, and show how its performance can be improved by bootstrapping the learning from a larger question-answer pair corpus and from pretrained word embeddings.", "", "We introduce GuessWhat?!, a two-player guessing game as a testbed for research on the interplay of computer vision and dialogue systems. The goal of the game is to locate an unknown object in a rich image scene by asking a sequence of questions. Higher-level image understanding, like spatial reasoning and language grounding, is required to solve the proposed task. Our key contribution is the collection of a large-scale dataset consisting of 150K human-played games with a total of 800K visual question-answer pairs on 66K images. 
We explain our design decisions in collecting the dataset and introduce the oracle and questioner tasks that are associated with the two players of the game. We prototyped deep learning models to establish initial baselines of the introduced tasks.", "", "We introduce the first goal-driven training for visual question answering and dialog agents. Specifically, we pose a cooperative 'image guessing' game between two agents -- Qbot and Abot -- who communicate in natural language dialog so that Qbot can select an unseen image from a lineup of images. We use deep reinforcement learning (RL) to learn the policies of these agents end-to-end -- from pixels to multi-agent multi-round dialog to game reward. We demonstrate two experimental results. First, as a 'sanity check' demonstration of pure RL (from scratch), we show results on a synthetic world, where the agents communicate in ungrounded vocabulary, i.e., symbols with no pre-specified meanings (X, Y, Z). We find that two bots invent their own communication protocol and start using certain symbols to ask answer about certain visual attributes (shape color style). Thus, we demonstrate the emergence of grounded language and communication among 'visual' dialog agents with no human supervision. Second, we conduct large-scale real-image experiments on the VisDial dataset, where we pretrain with supervised dialog data and show that the RL 'fine-tuned' agents significantly outperform SL agents. Interestingly, the RL Qbot learns to ask questions that Abot is good at, ultimately resulting in more informative dialog and a better team.", "In this work we combine two research threads from Vision Graphics and Natural Language Processing to formulate an image generation task conditioned on attributes in a multi-turn setting. By multiturn, we mean the image is generated in a series of steps of user-specified conditioning information. Our proposed approach is practically useful and offers insights into neural interpretability. We introduce a framework that includes a novel training algorithm as well as model improvements built for the multi-turn setting. We demonstrate that this framework generates a sequence of images that match the given conditioning information and that this task is useful for more detailed benchmarking and analysis of conditional image generation methods.", "Traditional dialog systems used in goal-oriented applications require a lot of domain-specific handcrafting, which hinders scaling up to new domains. End-to-end dialog systems, in which all components are trained from the dialogs themselves, escape this limitation. But the encouraging success recently obtained in chit-chat dialog may not carry over to goal-oriented settings. This paper proposes a testbed to break down the strengths and shortcomings of end-to-end dialog systems in goal-oriented applications. Set in the context of restaurant reservation, our tasks require manipulating sentences and symbols, so as to properly conduct conversations, issue API calls and use the outputs of such calls. We show that an end-to-end dialog system based on Memory Networks can reach promising, yet imperfect, performance and learn to perform non-trivial operations. We confirm those results by comparing our system to a hand-crafted slot-filling baseline on data from the second Dialog State Tracking Challenge (, 2014a). We show similar result patterns on data extracted from an online concierge service." ] }
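The per-turn state tracking described above can be pictured as a recurrent cell that consumes concatenated image and text features at each dialogue turn. This is a hedged sketch with assumed feature dimensions, not the actual SeqAttnGAN tracker:

```python
import torch
import torch.nn as nn

img_dim, txt_dim, state_dim = 128, 64, 256
tracker = nn.GRUCell(img_dim + txt_dim, state_dim)  # one update per dialogue turn

state = torch.zeros(1, state_dim)        # initial dialogue state
for turn in range(3):                    # three dialogue turns
    img_feat = torch.randn(1, img_dim)   # encoding of the current image
    txt_feat = torch.randn(1, txt_dim)   # encoding of the user's instruction
    state = tracker(torch.cat([img_feat, txt_feat], dim=1), state)
print(state.shape)  # torch.Size([1, 256]); `state` would condition the generator
```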
1812.08125
2904944335
Time-of-Flight (ToF) cameras require active illumination to obtain depth information, so the power of the illumination directly affects the performance of ToF cameras. Traditional ToF imaging algorithms are very sensitive to illumination, and depth accuracy degrades rapidly as illumination power decreases. Therefore, the design of a power-efficient ToF camera always creates a painful dilemma in the illumination-performance trade-off. In this paper, we show that despite the weak signals in many areas under extreme short-exposure settings, these signals as a whole can be well utilized through a learning process which directly translates the weak and noisy ToF camera raw data to a depth map. This creates an opportunity to tackle the aforementioned dilemma and make a very power-efficient ToF camera possible. To enable the learning, we collect a comprehensive dataset under a variety of scenes and photographic conditions with a specialized ToF camera. Experiments show that our method is able to robustly process ToF camera raw data with an exposure time one order of magnitude shorter than that used in conventional ToF cameras. In addition to evaluating our approach both quantitatively and qualitatively, we also discuss its implications for designing the next generation of power-efficient ToF cameras. We will make our dataset and code publicly available.
Depth reconstruction based on ToF cameras. ToF cameras face many challenging problems when extracting depth from raw measurements that are phase-shifted with respect to the emitted modulated infrared signal. @cite_33 established a two-component, dual-frequency approach to resolving phase ambiguity, achieving significant accuracy improvements when the distortion is caused by multipath interference (MPI). Several methods have been proposed to deal with MPI distortions, including adding or modifying hardware @cite_7 @cite_28 @cite_15 , employing multiple modulation frequencies @cite_33 @cite_10 @cite_13 @cite_1 , and estimating light transport through an approximation of depth @cite_4 @cite_12 . @cite_6 corrects MPI errors with a two-stage training strategy: first training the encoder to represent MPI-corrupted depth images on a captured dataset, and then using synthetic scenes to train the decoder to correct the depth. However, these pipelines assume that no cumulative error or information loss is introduced in the preceding stage, so their final results are likely to contain accumulated errors from multiple stages. (The standard phase-to-depth relation and its ambiguity are sketched after the reference block below.)
{ "cite_N": [ "@cite_13", "@cite_4", "@cite_33", "@cite_7", "@cite_28", "@cite_1", "@cite_6", "@cite_15", "@cite_10", "@cite_12" ], "mid": [ "1527354995", "", "2129804534", "2031980170", "2062090038", "", "", "1979306294", "2949752378", "134016981" ], "abstract": [ "Time-of-flight (ToF) cameras calculate depth maps by reconstructing phase shifts of amplitude-modulated signals. For broad illumination of transparent objects, reflections from multiple scene points can illuminate a given pixel, giving rise to an erroneous depth map. We report here a sparsity-regularized solution that separates K interfering components using multiple modulation frequency measurements. The method maps ToF imaging to the general framework of spectral estimation theory and has applications in improving depth profiles and exploiting multiple scattering.", "", "Time-of-flight range cameras acquire a three-dimensional image of a scene simultaneously for all pixels from a single viewing location. Attempts to use range cameras for metrology applications have been hampered by the multi-path problem, which causes range distortions when stray light interferes with the range measurement in a given pixel. Correcting multi-path distortions by post-processing the three-dimensional measurement data has been investigated, but enjoys limited success because the interference is highly scene dependent. An alternative approach based on separating the strongest and weaker sources of light returned to each pixel, prior to range decoding, is more successful, but has only been demonstrated on custom built range cameras, and has not been suitable for general metrology applications. In this paper we demonstrate an algorithm applied to both the Mesa Imaging SR-4000 and Canesta Inc. XZ-422 Demonstrator unmodified off-the-shelf range cameras. Additional raw images are acquired and processed using an optimization approach, rather than relying on the processing provided by the manufacturer, to determine the individual component returns in each pixel. Substantial improvements in accuracy are observed, especially in the darker regions of the scene.", "We present femto-photography, a novel imaging technique to capture and visualize the propagation of light. With an effective exposure time of 1.85 picoseconds (ps) per frame, we reconstruct movies of ultrafast events at an equivalent resolution of about one half trillion frames per second. Because cameras with this shutter speed do not exist, we re-purpose modern imaging hardware to record an ensemble average of repeatable events that are synchronized to a streak sensor, in which the time of arrival of light from the scene is coded in one of the sensor's spatial dimensions. We introduce reconstruction methods that allow us to visualize the propagation of femtosecond light pulses through macroscopic scenes; at such fast resolution, we must consider the notion of time-unwarping between the camera's and the world's space-time coordinate systems to take into account effects associated with the finite speed of light. We apply our femto-photography technique to visualizations of very different scenes, which allow us to observe the rich dynamics of time-resolved light transport effects, including scattering, specular reflections, diffuse interreflections, diffraction, caustics, and subsurface scattering. Our work has potential applications in artistic, educational, and scientific visualizations; industrial imaging to analyze material properties; and medical imaging to reconstruct subsurface elements. 
In addition, our time-resolved technique may motivate new forms of computational photography.", "Transient imaging is an exciting a new imaging modality that can be used to understand light propagation in complex environments, and to capture and analyze scene properties such as the shape of hidden objects or the reflectance properties of surfaces. Unfortunately, research in transient imaging has so far been hindered by the high cost of the required instrumentation, as well as the fragility and difficulty to operate and calibrate devices such as femtosecond lasers and streak cameras. In this paper, we explore the use of photonic mixer devices (PMD), commonly used in inexpensive time-of-flight cameras, as alternative instrumentation for transient imaging. We obtain a sequence of differently modulated images with a PMD sensor, impose a model for local light object interaction, and use an optimization procedure to infer transient images given the measurements and model. The resulting method produces transient images at a cost several orders of magnitude below existing methods, while simultaneously simplifying and speeding up the capture process.", "", "", "Transient images help to analyze light transport in scenes. Besides two spatial dimensions, they are resolved in time of flight. Cost-efficient approaches for their capture use amplitude modulated continuous wave lidar systems but typically take more than a minute of capture time. We propose new techniques for measurement and reconstruction of transient images, which drastically reduce this capture time. To this end, we pose the problem of reconstruction as a trigonometric moment problem. A vast body of mathematical literature provides powerful solutions to such problems. In particular, the maximum entropy spectral estimate and the Pisarenko estimate provide two closed-form solutions for reconstruction using continuous densities or sparse distributions, respectively. Both methods can separate m distinct returns using measurements at m modulation frequencies. For m = 3 our experiments with measured data confirm this. Our GPU-accelerated implementation can reconstruct more than 100000 frames of a transient image per second. Additionally, we propose modifications of the capture routine to achieve the required sinusoidal modulation without increasing the capture time. This allows us to capture up to 18.6 transient images per second, leading to transient video. An important byproduct is a method for removal of multipath interference in range imaging.", "A major issue with Time of Flight sensors is the presence of multipath interference. We present Sparse Reflections Analysis (SRA), an algorithm for removing this interference which has two main advantages. First, it allows for very general forms of multipath, including interference with three or more paths, diffuse multipath resulting from Lambertian surfaces, and combinations thereof. SRA removes this general multipath with robust techniques based on @math optimization. Second, due to a novel dimension reduction, we are able to produce a very fast version of SRA, which is able to run at frame rate. Experimental results on both synthetic data with ground truth, as well as real images of challenging scenes, validate the approach.", "Multipath is a prominent phenomenon in Time-of-Flight camera images and distorts the measurements by several centimetres. It troubles applications that demand for high accuracy, such as robotic manipulation or mapping. 
This paper addresses the photometric processes that cause multipath interference. It formulates an improved multipath model and designs a compensation process in order to correct the multipath-related errors. A calibration of the ToF illumination supports the process. The proposed approach, moreover, allows to include an environment model. The positive impact of this process is demonstrated." ] }
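For background on the phase-shifted measurements discussed above, the standard amplitude-modulated continuous-wave (AMCW) relations between measured phase and depth, and the ambiguity that multi-frequency methods resolve, are (generic ToF relations, not the cited papers' notation):

```latex
d = \frac{c\,\varphi}{4\pi f}, \qquad d_{\max} = \frac{c}{2f},
```

where \varphi \in [0, 2\pi) is the phase measured at modulation frequency f and c is the speed of light. Any depth d + k d_{\max} (integer k) produces the same phase, so a single frequency is ambiguous beyond d_{\max}; combining two modulation frequencies extends the unambiguous range, which is the idea behind the dual-frequency approach of @cite_33.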
1812.08125
2904944335
Time-of-Flight (ToF) cameras require active illumination to obtain depth information, so the power of the illumination directly affects the performance of ToF cameras. Traditional ToF imaging algorithms are very sensitive to illumination, and depth accuracy degrades rapidly as illumination power decreases. Therefore, the design of a power-efficient ToF camera always creates a painful dilemma in the illumination-performance trade-off. In this paper, we show that despite the weak signals in many areas under extreme short-exposure settings, these signals as a whole can be well utilized through a learning process which directly translates the weak and noisy ToF camera raw data to a depth map. This creates an opportunity to tackle the aforementioned dilemma and make a very power-efficient ToF camera possible. To enable the learning, we collect a comprehensive dataset under a variety of scenes and photographic conditions with a specialized ToF camera. Experiments show that our method is able to robustly process ToF camera raw data with an exposure time one order of magnitude shorter than that used in conventional ToF cameras. In addition to evaluating our approach both quantitatively and qualitatively, we also discuss its implications for designing the next generation of power-efficient ToF cameras. We will make our dataset and code publicly available.
Image enhancement under low light. For conventional RGB cameras, photography in low light is challenging. Several techniques have been proposed to increase the SNR of the recovered image @cite_17 @cite_9 @cite_3 @cite_14 @cite_22 . Chen et al. @cite_21 established a pipeline that trains a fully convolutional neural network to directly translate very noisy and dark Bayer-pattern camera raw data into high-quality color images. Despite the impressive results of the aforementioned studies, deep learning and data-driven approaches have not yet been adopted to recover high-quality depth information from weak and noisy ToF raw data. It remains unclear whether such a methodology is effective for ToF imaging; the aim of this paper is to establish its feasibility. (A sketch of the raw-packing and amplification preprocessing follows the reference block below.)
{ "cite_N": [ "@cite_14", "@cite_22", "@cite_9", "@cite_21", "@cite_3", "@cite_17" ], "mid": [ "2112515787", "", "2764975447", "2951408185", "2147800946", "2566376500" ], "abstract": [ "A general methodology for noise reduction and contrast enhancement in very noisy image data with low dynamic range is presented. Video footage recorded in very dim light is especially targeted. Smoothing kernels that automatically adapt to the local spatio-temporal intensity structure in the image sequences are constructed in order to preserve and enhance fine spatial detail and prevent motion blur. In color image data, the chromaticity is restored and demosaicing of raw RGB input data is performed simultaneously with the noise reduction. The method is very general, contains few user-defined parameters and has been developed for efficient parallel computation using a GPU. The technique has been applied to image sequences with various degrees of darkness and noise levels, and results from some of these tests, and comparisons to other methods, are presented. The present work has been inspired by research on vision in nocturnal animals, particularly the spatial and temporal visual summation that allows these animals to see in dim light.", "", "This paper presents a low-light image enhancement method using the variational-optimization-based Retinex algorithm. The proposed enhancement method first estimates the initial illumination and uses its gamma corrected version to constrain the illumination component. Next, the variational-based minimization is iteratively performed to separate the reflectance and illumination components. The color assignment of the estimated reflectance component is then performed to restore the color component using the input RGB color channels. Experimental results show that the proposed method can provide better enhanced result without saturation, noise amplification or color distortion.", "Imaging in low light is challenging due to low photon count and low SNR. Short-exposure images suffer from noise, while long exposure can induce blur and is often impractical. A variety of denoising, deblurring, and enhancement techniques have been proposed, but their effectiveness is limited in extreme conditions, such as video-rate imaging at night. To support the development of learning-based pipelines for low-light image processing, we introduce a dataset of raw short-exposure low-light images, with corresponding long-exposure reference images. Using the presented dataset, we develop a pipeline for processing low-light images, based on end-to-end training of a fully-convolutional network. The network operates directly on raw sensor data and replaces much of the traditional image processing pipeline, which tends to perform poorly on such data. We report promising results on the new dataset, analyze factors that affect performance, and highlight opportunities for future work. The results are shown in the supplementary video at this https URL", "The ability of learning networks to generalize can be greatly enhanced by providing constraints from the task domain. This paper demonstrates how such constraints can be integrated into a backpropagation network through the architecture of the network. This approach has been successfully applied to the recognition of handwritten zip code digits provided by the U.S. Postal Service. 
A single network learns the entire recognition operation, going from the normalized image of the character to the final classification.", "When one captures images in low-light conditions, the images often suffer from low visibility. Besides degrading the visual aesthetics of images, this poor quality may also significantly degenerate the performance of many computer vision and multimedia algorithms that are primarily designed for high-quality inputs. In this paper, we propose a simple yet effective low-light image enhancement (LIME) method. More concretely, the illumination of each pixel is first estimated individually by finding the maximum value in R, G, and B channels. Furthermore, we refine the initial illumination map by imposing a structure prior on it, as the final illumination map. Having the well-constructed illumination map, the enhancement can be achieved accordingly. Experiments on a number of challenging low-light images are present to reveal the efficacy of our LIME and show its superiority over several state-of-the-arts in terms of enhancement quality and efficiency." ] }
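The raw-to-RGB pipeline of Chen et al. @cite_21 starts by packing the Bayer mosaic into channels and amplifying by an exposure ratio before the network sees the data. Below is a minimal sketch of that preprocessing; the RGGB layout, the black level (512), and the 14-bit white level (16383) are assumed sensor-specific constants, not universal values:

```python
import numpy as np

def pack_bayer(raw, black_level=512, ratio=100.0):
    """Pack an RGGB Bayer mosaic (H, W) into 4 half-resolution channels,
    subtract the black level, and amplify by the exposure ratio."""
    x = np.maximum(raw.astype(np.float32) - black_level, 0)
    x = x / (16383 - black_level)              # normalize 14-bit raw to [0, 1]
    packed = np.stack([x[0::2, 0::2],          # R
                       x[0::2, 1::2],          # G1
                       x[1::2, 0::2],          # G2
                       x[1::2, 1::2]], axis=0) # B
    return np.clip(packed * ratio, 0, 1)       # amplified input to the network

print(pack_bayer(np.random.randint(512, 16384, (8, 8))).shape)  # (4, 4, 4)
```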
1812.08249
2903727631
State-of-the-art methods for video action recognition commonly use an ensemble of two networks: the spatial stream, which takes RGB frames as input, and the temporal stream, which takes optical flow as input. In recent work, both of these streams consist of 3D Convolutional Neural Networks, which apply spatiotemporal filters to the video clip before performing classification. Conceptually, the temporal filters should allow the spatial stream to learn motion representations, making the temporal stream redundant. However, we still see significant benefits in action recognition performance by including an entirely separate temporal stream, indicating that the spatial stream is "missing" some of the signal captured by the temporal stream. In this work, we first investigate whether motion representations are indeed missing in the spatial stream of 3D CNNs. Second, we demonstrate that these motion representations can be improved by distillation, by tuning the spatial stream to predict the outputs of the temporal stream, effectively combining both models into a single stream. Finally, we show that our Distilled 3D Network (D3D) achieves performance on par with two-stream approaches, using only a single model and with no need to compute optical flow.
Many approaches leverage the strength of single-image (2D) CNNs by applying a CNN to each individual video frame and pooling the predictions across time @cite_34 @cite_17 @cite_13 . However, naïve average pooling ignores the temporal dynamics of video. To capture temporal features, Two-Stream Networks introduce a second network, called the temporal stream, which takes a sequence of consecutive optical flow frames as input @cite_34 . The outputs of these networks are then combined by averaging or with a linear SVM. Other methods incorporate motion by changing the way features are pooled across time, for example with an LSTM or a CRF @cite_17 @cite_13 . These approaches have proven very effective, particularly when video data is limited and training a 3D CNN is therefore challenging. However, recent advances have made 3D CNN approaches, which require large video datasets to train, effective as well. (A minimal late-fusion sketch follows the reference block below.)
{ "cite_N": [ "@cite_34", "@cite_13", "@cite_17" ], "mid": [ "2156303437", "2583815496", "2951183276" ], "abstract": [ "We investigate architectures of discriminatively trained deep Convolutional Networks (ConvNets) for action recognition in video. The challenge is to capture the complementary information on appearance from still frames and motion between frames. We also aim to generalise the best performing hand-crafted features within a data-driven learning framework. Our contribution is three-fold. First, we propose a two-stream ConvNet architecture which incorporates spatial and temporal networks. Second, we demonstrate that a ConvNet trained on multi-frame dense optical flow is able to achieve very good performance in spite of limited training data. Finally, we show that multitask learning, applied to two different action classification datasets, can be used to increase the amount of training data and improve the performance on both. Our architecture is trained and evaluated on the standard video actions benchmarks of UCF-101 and HMDB-51, where it is competitive with the state of the art. It also exceeds by a large margin previous attempts to use deep nets for video classification.", "Actions are more than just movements and trajectories: we cook to eat and we hold a cup to drink from it. A thorough understanding of videos requires going beyond appearance modeling and necessitates reasoning about the sequence of activities, as well as the higher-level constructs such as intentions. But how do we model and reason about these? We propose a fully-connected temporal CRF model for reasoning over various aspects of activities that includes objects, actions, and intentions, where the potentials are predicted by a deep network. End-to-end training of such structured models is a challenging endeavor: For inference and learning we need to construct mini-batches consisting of whole videos, leading to mini-batches with only a few videos. This causes high-correlation between data points leading to breakdown of the backprop algorithm. To address this challenge, we present an asynchronous variational inference method that allows efficient end-to-end training. Our method achieves a classification mAP of 22.4 on the Charades [42] benchmark, outperforming the state-of-the-art (17.2 mAP), and offers equal gains on the task of temporal localization.", "Models based on deep convolutional networks have dominated recent image interpretation tasks; we investigate whether models which are also recurrent, or \"temporally deep\", are effective for tasks involving sequences, visual and otherwise. We develop a novel recurrent convolutional architecture suitable for large-scale visual learning which is end-to-end trainable, and demonstrate the value of these models on benchmark video recognition tasks, image description and retrieval problems, and video narration challenges. In contrast to current models which assume a fixed spatio-temporal receptive field or simple temporal averaging for sequential processing, recurrent convolutional models are \"doubly deep\"' in that they can be compositional in spatial and temporal \"layers\". Such models may have advantages when target concepts are complex and or training data are limited. Learning long-term dependencies is possible when nonlinearities are incorporated into the network state updates. 
Long-term RNN models are appealing in that they directly can map variable-length inputs (e.g., video frames) to variable length outputs (e.g., natural language text) and can model complex temporal dynamics; yet they can be optimized with backpropagation. Our recurrent long-term models are directly connected to modern visual convnet models and can be jointly trained to simultaneously learn temporal dynamics and convolutional perceptual representations. Our results show such models have distinct advantages over state-of-the-art models for recognition or generation which are separately defined and or optimized." ] }
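Two-stream late fusion as described above reduces to averaging the per-stream class posteriors. A toy sketch follows; the class count and the random scores are placeholders, not values from the cited work:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

spatial_logits = np.random.randn(101)   # RGB-stream scores, e.g. over 101 classes
temporal_logits = np.random.randn(101)  # optical-flow-stream scores

fused = 0.5 * softmax(spatial_logits) + 0.5 * softmax(temporal_logits)
print(int(fused.argmax()))  # predicted action class after late fusion
```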
1812.08249
2903727631
State-of-the-art methods for video action recognition commonly use an ensemble of two networks: the spatial stream, which takes RGB frames as input, and the temporal stream, which takes optical flow as input. In recent work, both of these streams consist of 3D Convolutional Neural Networks, which apply spatiotemporal filters to the video clip before performing classification. Conceptually, the temporal filters should allow the spatial stream to learn motion representations, making the temporal stream redundant. However, we still see significant benefits in action recognition performance by including an entirely separate temporal stream, indicating that the spatial stream is "missing" some of the signal captured by the temporal stream. In this work, we first investigate whether motion representations are indeed missing in the spatial stream of 3D CNNs. Second, we demonstrate that these motion representations can be improved by distillation, by tuning the spatial stream to predict the outputs of the temporal stream, effectively combining both models into a single stream. Finally, we show that our Distilled 3D Network (D3D) achieves performance on par with two-stream approaches, using only a single model and with no need to compute optical flow.
Many approaches have been proposed to incorporate motion features into 3D CNNs without the use of optical flow inputs. Motion Feature Networks, Optical Flow-Guided Features, and Representation Flow all accomplish this by introducing modules into the network architecture which explicitly compute motion representations @cite_31 @cite_7 @cite_0 . Alternatively, several approaches propose to replace the optical flow inputs of the temporal stream with a CNN that produces optical flow. For example, Hidden Two-Stream and TVNet use a motion representation that is trained end-to-end for action recognition @cite_9 @cite_18 . However, these methods, as well as many other methods that use CNNs to predict optical flow, do not use "vanilla" 3D CNNs. Instead, they use specialized layers, such as correlations or cost volumes, so they do not answer whether vanilla 3D CNNs can learn motion representations @cite_4 @cite_11 . (A sketch of the distillation objective used by D3D follows the reference block below.)
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_7", "@cite_9", "@cite_0", "@cite_31", "@cite_11" ], "mid": [ "2604128149", "", "2964094092", "2963216700", "2894724248", "2884797191", "" ], "abstract": [ "Analyzing videos of human actions involves understanding the temporal relationships among video frames. State-of-the-art action recognition approaches rely on traditional optical flow estimation methods to pre-compute motion information for CNNs. Such a two-stage approach is computationally expensive, storage demanding, and not end-to-end trainable. In this paper, we present a novel CNN architecture that implicitly captures motion information between adjacent frames. We name our approach hidden two-stream CNNs because it only takes raw video frames as input and directly predicts action classes without explicitly computing optical flow. Our end-to-end approach is 10x faster than its two-stage baseline. Experimental results on four challenging action recognition datasets: UCF101, HMDB51, THUMOS14 and ActivityNet v1.2 show that our approach significantly outperforms the previous best real-time approaches.", "", "Motion representation plays a vital role in human action recognition in videos. In this study, we introduce a novel compact motion representation for video action recognition, named Optical Flow guided Feature (OFF), which enables the network to distill temporal information through a fast and robust approach. The OFF is derived from the definition of optical flow and is orthogonal to the optical flow. The derivation also provides theoretical support for using the difference between two frames. By directly calculating pixel-wise spatio-temporal gradients of the deep feature maps, the OFF could be embedded in any existing CNN based video action recognition framework with only a slight additional cost. It enables the CNN to extract spatiotemporal information, especially the temporal information between frames simultaneously. This simple but powerful idea is validated by experimental results. The network with OFF fed only by RGB inputs achieves a competitive accuracy of 93.3 on UCF-101, which is comparable with the result obtained by two streams (RGB and optical flow), but is 15 times faster in speed. Experimental results also show that OFF is complementary to other motion modalities such as optical flow. When the proposed method is plugged into the state-of-the-art video action recognition framework, it has 96.0 and 74.2 accuracy on UCF-101 and HMDB-51 respectively. The code for this project is available at: https: github.com kevin-ssy Optical-Flow-Guided-Feature", "Despite the recent success of end-to-end learned representations, hand-crafted optical flow features are still widely used in video analysis tasks. To fill this gap, we propose TVNet, a novel end-to-end trainable neural network, to learn optical-flow-like features from data. TVNet subsumes a specific optical flow solver, the TV-L1 method, and is initialized by unfolding its optimization iterations as neural layers. TVNet can therefore be used directly without any extra learning. Moreover, it can be naturally concatenated with other task-specific networks to formulate an end-to-end architecture, thus making our method more efficient than current multi-stage approaches by avoiding the need to pre-compute and store features on disk. Finally, the parameters of the TVNet can be further fine-tuned by end-to-end training. This enables TVNet to learn richer and task-specific patterns beyond exact optical flow. 
Extensive experiments on two action recognition benchmarks verify the effectiveness of the proposed approach. Our TVNet achieves better accuracies than all compared methods, while being competitive with the fastest counterpart in terms of features extraction time.", "In this paper, we propose a convolutional layer inspired by optical flow algorithms to learn motion representations. Our representation flow layer is a fully-differentiable layer designed to optimally capture the flow' of any representation channel within a convolutional neural network. Its parameters for iterative flow optimization are learned in an end-to-end fashion together with the other model parameters, maximizing the action recognition performance. Furthermore, we newly introduce the concept of learning flow of flow' representations by stacking multiple representation flow layers. We conducted extensive experimental evaluations, confirming its advantages over previous recognition models using traditional optical flows in both computational speed and performance.", "Spatio-temporal representations in frame sequences play an important role in the task of action recognition. Previously, a method of using optical flow as a temporal information in combination with a set of RGB images that contain spatial information has shown great performance enhancement in the action recognition tasks. However, it has an expensive computational cost and requires two-stream (RGB and optical flow) framework. In this paper, we propose MFNet (Motion Feature Network) containing motion blocks which make it possible to encode spatio-temporal information between adjacent frames in a unified network that can be trained end-to-end. The motion block can be attached to any existing CNN-based action recognition frameworks with only a small additional cost. We evaluated our network on two of the action recognition datasets (Jester and Something-Something) and achieved competitive performances for both datasets by training the networks from scratch.", "" ] }
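The distillation objective summarized in the D3D abstract above (tuning the spatial stream to match the temporal stream's outputs) can be sketched as a soft-target term added to the usual cross-entropy. The temperature and weighting below are illustrative assumptions, not the paper's reported hyperparameters:

```python
import numpy as np

def softmax(z, T=1.0):
    e = np.exp((z - z.max()) / T)
    return e / e.sum()

def distill_loss(student_logits, teacher_logits, label, T=2.0, lam=0.5):
    """Supervised cross-entropy plus KL to the teacher's softened outputs."""
    p_s, p_t = softmax(student_logits, T), softmax(teacher_logits, T)
    ce = -np.log(softmax(student_logits)[label])    # hard-label term
    kl = np.sum(p_t * (np.log(p_t) - np.log(p_s)))  # soft-target term
    return ce + lam * (T ** 2) * kl

# Toy usage: spatial stream as student, temporal stream as teacher.
print(distill_loss(np.random.randn(10), np.random.randn(10), label=3))
```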
1812.07996
2904935312
In this paper, we present a method to mine object-part patterns from conv-layers of a pre-trained convolutional neural network (CNN). The mined object-part patterns are organized by an And-Or graph (AOG). This interpretable AOG representation consists of a four-layer semantic hierarchy, i.e., semantic parts, part templates, latent patterns, and neural units. The AOG associates each object part with certain neural units in feature maps of conv-layers. The AOG is constructed in a weakly-supervised manner, i.e., very few annotations (e.g., 3-20) of object parts are used to guide the learning of AOGs. We develop a question-answering (QA) method that uses active human-computer communications to mine patterns from a pre-trained CNN, in order to incrementally explain more features in conv-layers. During the learning process, our QA method uses the current AOG for part localization. The QA method actively identifies objects whose feature maps cannot be explained by the AOG. Then, our method asks people to annotate parts on the unexplained objects, and uses the answers to discover CNN patterns corresponding to the newly labeled parts. In this way, our method gradually grows new branches and refines existing branches of the AOG to semanticize CNN representations. In experiments, our method exhibited high learning efficiency. Our method used about 1/6-1/3 of the part annotations for training, but achieved similar or better part-localization performance than fast-RCNN methods.
Gradient-based visualization @cite_19 @cite_45 @cite_37 estimates the input image that maximizes the activation score of a neural unit. Dosovitskiy @cite_55 proposed up-convolutional nets to invert feature maps of conv-layers back to images. Unlike gradient-based methods, up-convolutional nets cannot mathematically guarantee that the visualization result reflects actual neural representations. In recent years, @cite_35 has provided a reliable tool to visualize filters in different conv-layers of a CNN. (A minimal activation-maximization sketch follows the reference block below.)
{ "cite_N": [ "@cite_35", "@cite_37", "@cite_55", "@cite_19", "@cite_45" ], "mid": [ "", "2962851944", "2273348943", "2952186574", "2949987032" ], "abstract": [ "", "This paper addresses the visualisation of image classification models, learnt using deep Convolutional Networks (ConvNets). We consider two visualisation techniques, based on computing the gradient of the class score with respect to the input image. The first one generates an image, which maximises the class score [5], thus visualising the notion of the class, captured by a ConvNet. The second technique computes a class saliency map, specific to a given image and class. We show that such maps can be employed for weakly supervised object segmentation using classification ConvNets. Finally, we establish the connection between the gradient-based ConvNet visualisation methods and deconvolutional networks [13].", "Feature representations, both hand-designed and learned ones, are often hard to analyze and interpret, even when they are extracted from visual data. We propose a new approach to study image representations by inverting them with an up-convolutional neural network. We apply the method to shallow representations (HOG, SIFT, LBP), as well as to deep networks. For shallow representations our approach provides significantly better reconstructions than existing methods, revealing that there is surprisingly rich information contained in these features. Inverting a deep network trained on ImageNet provides several insights into the properties of the feature representation learned by the network. Most strikingly, the colors and the rough contours of an image can be reconstructed from activations in higher network layers and even from the predicted class probabilities.", "Large Convolutional Network models have recently demonstrated impressive classification performance on the ImageNet benchmark. However there is no clear understanding of why they perform so well, or how they might be improved. In this paper we address both issues. We introduce a novel visualization technique that gives insight into the function of intermediate feature layers and the operation of the classifier. We also perform an ablation study to discover the performance contribution from different model layers. This enables us to find model architectures that outperform Krizhevsky al on the ImageNet classification benchmark. We show our ImageNet model generalizes well to other datasets: when the softmax classifier is retrained, it convincingly beats the current state-of-the-art results on Caltech-101 and Caltech-256 datasets.", "Image representations, from SIFT and Bag of Visual Words to Convolutional Neural Networks (CNNs), are a crucial component of almost any image understanding system. Nevertheless, our understanding of them remains limited. In this paper we conduct a direct analysis of the visual information contained in representations by asking the following question: given an encoding of an image, to which extent is it possible to reconstruct the image itself? To answer this question we contribute a general framework to invert representations. We show that this method can invert representations such as HOG and SIFT more accurately than recent alternatives while being applicable to CNNs too. We then use this technique to study the inverse of recent state-of-the-art CNN image representations for the first time. 
Among our findings, we show that several layers in CNNs retain photographically accurate information about the image, with different degrees of geometric and photometric invariance." ] }
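Gradient-based visualization of the kind cited above amounts to gradient ascent on the input to maximize a chosen unit's activation. A minimal sketch follows, assuming a generic differentiable `model` that maps an image batch to per-unit scores (the step count, learning rate, and input shape are illustrative):

```python
import torch

def activation_maximization(model, unit, steps=200, lr=0.1, shape=(1, 3, 224, 224)):
    """Find an input that maximizes `model(x)[0, unit]` by gradient ascent."""
    x = torch.randn(shape, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = -model(x)[0, unit]  # negate: maximize the unit's activation
        loss.backward()
        opt.step()
    return x.detach()

# Usage (hypothetical model and unit index):
# viz = activation_maximization(my_cnn, unit=42)
```

In practice, regularizers such as blurring or total-variation penalties are usually added so the resulting image looks natural rather than adversarial.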
1812.07996
2904935312
In this paper, we present a method to mine object-part patterns from conv-layers of a pre-trained convolutional neural network (CNN). The mined object-part patterns are organized by an And-Or graph (AOG). This interpretable AOG representation consists of a four-layer semantic hierarchy, i.e., semantic parts, part templates, latent patterns, and neural units. The AOG associates each object part with certain neural units in feature maps of conv-layers. The AOG is constructed in a weakly-supervised manner, i.e., very few annotations (e.g., 3-20) of object parts are used to guide the learning of AOGs. We develop a question-answering (QA) method that uses active human-computer communications to mine patterns from a pre-trained CNN, in order to incrementally explain more features in conv-layers. During the learning process, our QA method uses the current AOG for part localization. The QA method actively identifies objects whose feature maps cannot be explained by the AOG. Then, our method asks people to annotate parts on the unexplained objects, and uses the answers to discover CNN patterns corresponding to the newly labeled parts. In this way, our method gradually grows new branches and refines existing branches of the AOG to semanticize CNN representations. In experiments, our method exhibited high learning efficiency. Our method used about 1/6-1/3 of the part annotations for training, but achieved similar or better part-localization performance than fast-RCNN methods.
Zhou @cite_7 proposed a method to accurately compute the image-resolution receptive field of neural activations in a feature map. The actual receptive field of a neural activation is smaller than the theoretical one computed from filter sizes, and its accurate estimation is crucial for understanding a filter's representations; the theoretical computation is sketched after the reference block below.
{ "cite_N": [ "@cite_7" ], "mid": [ "1899185266" ], "abstract": [ "With the success of new computational architectures for visual processing, such as convolutional neural networks (CNN) and access to image databases with millions of labeled examples (e.g., ImageNet, Places), the state of the art in computer vision is advancing rapidly. One important factor for continued progress is to understand the representations that are learned by the inner layers of these deep architectures. Here we show that object detectors emerge from training CNNs to perform scene classification. As scenes are composed of objects, the CNN for scene classification automatically discovers meaningful objects detectors, representative of the learned scene categories. With object detectors emerging as a result of learning to recognize scenes, our work demonstrates that the same network can perform both scene recognition and object localization in a single forward-pass, without ever having been explicitly taught the notion of objects." ] }
1812.07996
2904935312
In this paper, we present a method to mine object-part patterns from conv-layers of a pre-trained convolutional neural network (CNN). The mined object-part patterns are organized by an And-Or graph (AOG). This interpretable AOG representation consists of a four-layer semantic hierarchy, i.e., semantic parts, part templates, latent patterns, and neural units. The AOG associates each object part with certain neural units in feature maps of conv-layers. The AOG is constructed in a weakly-supervised manner, i.e., very few annotations (e.g., 3-20) of object parts are used to guide the learning of AOGs. We develop a question-answering (QA) method that uses active human-computer communications to mine patterns from a pre-trained CNN, in order to incrementally explain more features in conv-layers. During the learning process, our QA method uses the current AOG for part localization. The QA method actively identifies objects whose feature maps cannot be explained by the AOG. Then, our method asks people to annotate parts on the unexplained objects, and uses answers to discover CNN patterns corresponding to the newly labeled parts. In this way, our method gradually grows new branches and refines existing branches on the AOG to semanticize CNN representations. In experiments, our method exhibited a high learning efficiency. Our method used about 1/6-1/3 of the part annotations for training, but achieved similar or better part-localization performance than fast-RCNN methods.
@cite_51 explored semantic meanings of convolutional filters. @cite_3 evaluated the transferability of filters in intermediate conv-layers. @cite_42 @cite_67 computed feature distributions of different categories in the CNN feature space. Methods of @cite_47 @cite_38 propagated gradients of feature maps w.r.t. the CNN loss back to the image, in order to estimate the image regions that directly contribute to the network output; the core gradient step is sketched after the reference block below. @cite_22 proposed the LIME model to extract image regions that a CNN uses to predict a label (or an attribute).
{ "cite_N": [ "@cite_38", "@cite_67", "@cite_22", "@cite_42", "@cite_3", "@cite_47", "@cite_51" ], "mid": [ "2616247523", "1661149683", "2282821441", "2411252390", "2949667497", "2962981568", "1673923490" ], "abstract": [ "We propose a technique for producing \"visual explanations\" for decisions from a large class of CNN-based models, making them more transparent. Our approach - Gradient-weighted Class Activation Mapping (Grad-CAM), uses the gradients of any target concept, flowing into the final convolutional layer to produce a coarse localization map highlighting the important regions in the image for predicting the concept. Unlike previous approaches, GradCAM is applicable to a wide variety of CNN model-families: (1) CNNs with fully-connected layers (e.g. VGG), (2) CNNs used for structured outputs (e.g. captioning), (3) CNNs used in tasks with multimodal inputs (e.g. VQA) or reinforcement learning, without any architectural changes or re-training. We combine GradCAM with fine-grained visualizations to create a high-resolution class-discriminative visualization and apply it to off-the-shelf image classification, captioning, and visual question answering (VQA) models, including ResNet-based architectures. In the context of image classification models, our visualizations (a) lend insights into their failure modes (showing that seemingly unreasonable predictions have reasonable explanations), (b) are robust to adversarial images, (c) outperform previous methods on weakly-supervised localization, (d) are more faithful to the underlying model and (e) help achieve generalization by identifying dataset bias. For captioning and VQA, our visualizations show that even non-attention based models can localize inputs. Finally, we conduct human studies to measure if GradCAM explanations help users establish trust in predictions from deep networks and show that GradCAM helps untrained users successfully discern a \"stronger\" deep network from a \"weaker\" one. Our code is available at this https URL A demo and a video of the demo can be found at this http URL and youtu.be COjUB9Izk6E.", "We introduce an approach for analyzing the variation of features generated by convolutional neural networks (CNNs) trained on large image datasets with respect to scene factors that occur in natural images. Such factors may include object style, 3D viewpoint, color, and scene lighting configuration. Our approach analyzes CNN feature responses with respect to different scene factors by controlling for them via rendering using a large database of 3D CAD models. The rendered images are presented to a trained CNN and responses for different layers are studied with respect to the input scene factors. We perform a linear decomposition of the responses based on knowledge of the input scene factors and analyze the resulting components. In particular, we quantify their relative importance in the CNN responses and visualize them using principal component analysis. We show qualitative and quantitative results of our study on three trained CNNs: AlexNet [18], Places [43], and Oxford VGG [8]. We observe important differences across the different networks and CNN layers with respect to different scene factors and object categories. Finally, we demonstrate that our analysis based on computer-generated imagery translates to the network representation of natural images.", "Despite widespread adoption, machine learning models remain mostly black boxes. 
Understanding the reasons behind predictions is, however, quite important in assessing trust, which is fundamental if one plans to take action based on a prediction, or when choosing whether to deploy a new model. Such understanding also provides insights into the model, which can be used to transform an untrustworthy model or prediction into a trustworthy one. In this work, we propose LIME, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner, by learning an interpretable model locally around the prediction. We also propose a method to explain models by presenting representative individual predictions and their explanations in a non-redundant way, framing the task as a submodular optimization problem. We demonstrate the flexibility of these methods by explaining different models for text (e.g. random forests) and image classification (e.g. neural networks). We show the utility of explanations via novel experiments, both simulated and with human subjects, on various scenarios that require trust: deciding if one should trust a prediction, choosing between models, improving an untrustworthy classifier, and identifying why a classifier should not be trusted.", "The outputs of a trained neural network contain much richer information than just a one-hot classifier. For example, a neural network might give an image of a dog the probability of one in a million of being a cat but it is still much larger than the probability of being a car. To reveal the hidden structure in them, we apply two unsupervised learning algorithms, PCA and ICA, to the outputs of a deep Convolutional Neural Network trained on the ImageNet of 1000 classes. The PCA/ICA embedding of the object classes reveals their visual similarity and the PCA/ICA components can be interpreted as common visual features shared by similar object classes. For an application, we proposed a new zero-shot learning method, in which the visual features learned by PCA/ICA are employed. Our zero-shot learning method achieves the state-of-the-art results on the ImageNet of over 20000 classes.", "Many deep neural networks trained on natural images exhibit a curious phenomenon in common: on the first layer they learn features similar to Gabor filters and color blobs. Such first-layer features appear not to be specific to a particular dataset or task, but general in that they are applicable to many datasets and tasks. Features must eventually transition from general to specific by the last layer of the network, but this transition has not been studied extensively. In this paper we experimentally quantify the generality versus specificity of neurons in each layer of a deep convolutional neural network and report a few surprising results. Transferability is negatively affected by two distinct issues: (1) the specialization of higher layer neurons to their original task at the expense of performance on the target task, which was expected, and (2) optimization difficulties related to splitting networks between co-adapted neurons, which was not expected. In an example network trained on ImageNet, we demonstrate that either of these two issues may dominate, depending on whether features are transferred from the bottom, middle, or top of the network. We also document that the transferability of features decreases as the distance between the base task and target task increases, but that transferring features even from distant tasks can be better than using random features. 
A final surprising result is that initializing a network with transferred features from almost any number of layers can produce a boost to generalization that lingers even after fine-tuning to the target dataset.", "As machine learning algorithms are increasingly applied to high impact yet high risk tasks, such as medical diagnosis or autonomous driving, it is critical that researchers can explain how such algorithms arrived at their predictions. In recent years, a number of image saliency methods have been developed to summarize where highly complex neural networks “look” in an image for evidence for their predictions. However, these techniques are limited by their heuristic nature and architectural constraints. In this paper, we make two main contributions: First, we propose a general framework for learning different kinds of explanations for any black box algorithm. Second, we specialise the framework to find the part of an image most responsible for a classifier decision. Unlike previous works, our method is model-agnostic and testable because it is grounded in explicit and interpretable image perturbations.", "Deep neural networks are highly expressive models that have recently achieved state of the art performance on speech and visual recognition tasks. While their expressiveness is the reason they succeed, it also causes them to learn uninterpretable solutions that could have counter-intuitive properties. In this paper we report two such properties. First, we find that there is no distinction between individual high level units and random linear combinations of high level units, according to various methods of unit analysis. It suggests that it is the space, rather than the individual units, that contains the semantic information in the high layers of neural networks. Second, we find that deep neural networks learn input-output mappings that are fairly discontinuous to a significant extent. We can cause the network to misclassify an image by applying a certain imperceptible perturbation, which is found by maximizing the network's prediction error. In addition, the specific nature of these perturbations is not a random artifact of learning: the same perturbation can cause a different network, trained on a different subset of the dataset, to misclassify the same input." ] }
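The gradient-propagation idea in @cite_47 @cite_38 reduces, in its plainest form, to differentiating a class score with respect to input pixels. A minimal vanilla-saliency sketch in PyTorch; the untrained ResNet is a stand-in for any classifier, and Grad-CAM itself additionally weights conv feature maps by pooled gradients:

```python
import torch
from torchvision import models

model = models.resnet18(weights=None).eval()  # stand-in; use trained weights in practice

x = torch.rand(1, 3, 224, 224, requires_grad=True)  # placeholder input image
score = model(x)[0].max()  # score of the top-scoring class
score.backward()           # gradient of the class score w.r.t. each pixel
saliency = x.grad.abs().max(dim=1)[0]  # (1, 224, 224) per-pixel importance map
```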
1812.07996
2904935312
In this paper, we present a method to mine object-part patterns from conv-layers of a pre-trained convolutional neural network (CNN). The mined object-part patterns are organized by an And-Or graph (AOG). This interpretable AOG representation consists of a four-layer semantic hierarchy, i.e., semantic parts, part templates, latent patterns, and neural units. The AOG associates each object part with certain neural units in feature maps of conv-layers. The AOG is constructed in a weakly-supervised manner, i.e., very few annotations (e.g., 3-20) of object parts are used to guide the learning of AOGs. We develop a question-answering (QA) method that uses active human-computer communications to mine patterns from a pre-trained CNN, in order to incrementally explain more features in conv-layers. During the learning process, our QA method uses the current AOG for part localization. The QA method actively identifies objects whose feature maps cannot be explained by the AOG. Then, our method asks people to annotate parts on the unexplained objects, and uses answers to discover CNN patterns corresponding to the newly labeled parts. In this way, our method gradually grows new branches and refines existing branches on the AOG to semanticize CNN representations. In experiments, our method exhibited a high learning efficiency. Our method used about 1/6-1/3 of the part annotations for training, but achieved similar or better part-localization performance than fast-RCNN methods.
Network-attack methods @cite_21 @cite_52 @cite_51 diagnosed network representations by computing adversarial samples for a CNN. In particular, influence functions @cite_52 were proposed to compute adversarial samples, providing plausible ways to create training samples that attack the learning of CNNs, fix the training set, and further debug the representations of a CNN; a gradient-based attack is sketched after the reference block below. @cite_10 discovered knowledge blind spots (unknown patterns) of a pre-trained CNN in a weakly-supervised manner.
{ "cite_N": [ "@cite_10", "@cite_21", "@cite_51", "@cite_52" ], "mid": [ "2583689529", "2964006983", "1673923490", "" ], "abstract": [ "Predictive models deployed in the real world may assign incorrect labels to instances with high confidence. Such errors or unknown unknowns are rooted in model incompleteness, and typically arise because of the mismatch between training data and the cases encountered at test time. As the models are blind to such errors, input from an oracle is needed to identify these failures. In this paper, we formulate and address the problem of informed discovery of unknown unknowns of any given predictive model where unknown unknowns occur due to systematic biases in the training data. We propose a model-agnostic methodology which uses feedback from an oracle to both identify unknown unknowns and to intelligently guide the discovery. We employ a two-phase approach which first organizes the data into multiple partitions based on the feature similarity of instances and the confidence scores assigned by the predictive model, and then utilizes an explore-exploit strategy for discovering unknown unknowns across these partitions. We demonstrate the efficacy of our framework by varying the underlying causes of unknown unknowns across various applications. To the best of our knowledge, this paper presents the first algorithmic approach to the problem of discovering unknown unknowns of predictive models.", "Recent research has revealed that the output of Deep Neural Networks (DNN) can be easily altered by adding relatively small perturbations to the input vector. In this paper, we analyze an attack in an extremely limited scenario where only one pixel can be modified. For that we propose a novel method for generating one-pixel adversarial perturbations based on differential evolution(DE). It requires less adversarial information(a black-box attack) and can fool more types of networks due to the inherent features of DE. The results show that 68.36 of the natural images in CIFAR-10 test dataset and 41.22 of the ImageNet (ILSVRC 2012) validation images can be perturbed to at least one target class by modifying just one pixel with 73.22 and 5.52 confidence on average. Thus, the proposed attack explores a different take on adversarial machine learning in an extreme limited scenario, showing that current DNNs are also vulnerable to such low dimension attacks. Besides, we also illustrate an important application of DE (or broadly speaking, evolutionary computation) in the domain of adversarial machine learning: creating tools that can effectively generate low-cost adversarial attacks against neural networks for evaluating robustness.", "Deep neural networks are highly expressive models that have recently achieved state of the art performance on speech and visual recognition tasks. While their expressiveness is the reason they succeed, it also causes them to learn uninterpretable solutions that could have counter-intuitive properties. In this paper we report two such properties. First, we find that there is no distinction between individual high level units and random linear combinations of high level units, according to various methods of unit analysis. It suggests that it is the space, rather than the individual units, that contains of the semantic information in the high layers of neural networks. Second, we find that deep neural networks learn input-output mappings that are fairly discontinuous to a significant extend. 
We can cause the network to misclassify an image by applying a certain imperceptible perturbation, which is found by maximizing the network's prediction error. In addition, the specific nature of these perturbations is not a random artifact of learning: the same perturbation can cause a different network, that was trained on a different subset of the dataset, to misclassify the same input.", "" ] }
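The cited attacks differ in mechanism: the one-pixel attack @cite_21 searches with differential evolution, while @cite_51 maximizes prediction error over an imperceptible perturbation. A one-step gradient-based variant (FGSM) is the shortest way to convey the shared idea; this sketch assumes a differentiable classifier and inputs in [0, 1]:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, label, eps=0.03):
    """One-step L-infinity attack: perturb x to increase the classification loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # Move each pixel eps in the direction that hurts the model most,
    # then clip back to the valid pixel range.
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()
```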
1812.07996
2904935312
In this paper, we present a method to mine object-part patterns from conv-layers of a pre-trained convolutional neural network (CNN). The mined object-part patterns are organized by an And-Or graph (AOG). This interpretable AOG representation consists of a four-layer semantic hierarchy, i.e., semantic parts, part templates, latent patterns, and neural units. The AOG associates each object part with certain neural units in feature maps of conv-layers. The AOG is constructed in a weakly-supervised manner, i.e., very few annotations (e.g., 3-20) of object parts are used to guide the learning of AOGs. We develop a question-answering (QA) method that uses active human-computer communications to mine patterns from a pre-trained CNN, in order to incrementally explain more features in conv-layers. During the learning process, our QA method uses the current AOG for part localization. The QA method actively identifies objects whose feature maps cannot be explained by the AOG. Then, our method asks people to annotate parts on the unexplained objects, and uses answers to discover CNN patterns corresponding to the newly labeled parts. In this way, our method gradually grows new branches and refines existing branches on the AOG to semanticize CNN representations. In experiments, our method exhibited a high learning efficiency. Our method used about 1/6-1/3 of the part annotations for training, but achieved similar or better part-localization performance than fast-RCNN methods.
Zhang @cite_43 developed a method to examine representations of conv-layers and automatically discover representations of a CNN that are potentially biased by the dataset; a toy version of this diagnosis is sketched after the reference block below. Furthermore, @cite_59 @cite_6 @cite_50 mined the local, bottom-up, and top-down information components that a model uses for prediction.
{ "cite_N": [ "@cite_43", "@cite_50", "@cite_6", "@cite_59" ], "mid": [ "2765787895", "", "2548995987", "2137454801" ], "abstract": [ "Given a pre-trained CNN without any testing samples, this paper proposes a simple yet effective method to diagnose feature representations of the CNN. We aim to discover representation flaws caused by potential dataset bias. More specifically, when the CNN is trained to estimate image attributes, we mine latent relationships between representations of different attributes inside the CNN. Then, we compare the mined attribute relationships with ground-truth attribute relationships to discover the CNN's blind spots and failure modes due to dataset bias. In fact, representation flaws caused by dataset bias cannot be examined by conventional evaluation strategies based on testing images, because testing images may also have a similar bias. Experiments have demonstrated the effectiveness of our method.", "", "This paper presents a method to quantitatively evaluate information contributions of individual bottom-up and top-down computing processes in object recognition. Our objective is to start a discovery on how to schedule bottom-up and top-down processes. (1) We identify two bottom-up processes and one top-down process in hierarchical models, termed α, β and γ channels respectively ; (2) We formulate the three channels under an unified Bayesian framework; (3) We use a blocking control strategy to isolate the three channels to separately train them and individually measure their information contributions in typical recognition tasks; (4) Based on the evaluated results, we integrate the three channels to detect objects with performance improvements obtained. Our experiments are performed in both low-middle level tasks, such as detecting edges bars and junctions, and high level tasks, such as detecting human faces and cars, together with a group of human study designed to compare computer and human perception.", "In this paper, we present a compositional boosting algorithm for detecting and recognizing 17 common image structures in low-middle level vision tasks. These structures, called \"graphlets\", are the most frequently occurring primitives, junctions and composite junctions in natural images, and are arranged in a 3-layer And-Or graph representation. In this hierarchic model, larger graphlets are decomposed (in And-nodes) into smaller graphlets in multiple alternative ways (at Or-nodes), and parts are shared and re-used between graphlets. Then we present a compositional boosting algorithm for computing the 17 graphlets categories collectively in the Bayesian framework. The algorithm runs recursively for each node A in the And-Or graph and iterates between two steps -bottom-up proposal and top-down validation. The bottom-up step includes two types of boosting methods, (i) Detecting instances of A (often in low resolutions) using Adaboosting method through a sequence of tests (weak classifiers) image feature, (ii) Proposing instances of A (often in high resolution) by binding existing children nodes of A through a sequence of compatibility tests on their attributes (e.g angles, relative size etc). The Adaboosting and binding methods generate a number of candidates for node A which are verified by a top-down process in a way similar to Data-Driven Markov Chain Monte Carlo [18]. 
Both the Adaboosting and binding methods are trained off-line for each graphlet category, and the compositional nature of the model means the algorithm is recursive and can be learned from a small training set. We apply this algorithm to a wide range of indoor and outdoor images with satisfactory results." ] }
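A toy rendering of the diagnosis in @cite_43: mine attribute relatedness from the CNN (here approximated by cosine similarity between attribute classifier weights, an assumption for illustration only) and flag pairs that disagree with ground-truth relatedness. Names and the threshold are illustrative:

```python
import numpy as np

def suspect_attribute_pairs(W, gt_relation, tol=0.4):
    """W: (n_attr, d) attribute classifier weights read out of the network.
    gt_relation: (n_attr, n_attr) ground-truth attribute relatedness in [0, 1].
    Returns index pairs whose mined relatedness deviates from the ground truth.
    """
    Wn = W / np.linalg.norm(W, axis=1, keepdims=True)
    mined = Wn @ Wn.T                    # cosine similarity as mined relatedness
    gap = np.abs(mined - gt_relation)
    return np.argwhere(np.triu(gap, k=1) > tol)  # upper triangle: each pair once
```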
1812.07996
2904935312
In this paper, we present a method to mine object-part patterns from conv-layers of a pre-trained convolutional neural network (CNN). The mined object-part patterns are organized by an And-Or graph (AOG). This interpretable AOG representation consists of a four-layer semantic hierarchy, i.e., semantic parts, part templates, latent patterns, and neural units. The AOG associates each object part with certain neural units in feature maps of conv-layers. The AOG is constructed in a weakly-supervised manner, i.e., very few annotations (e.g., 3-20) of object parts are used to guide the learning of AOGs. We develop a question-answering (QA) method that uses active human-computer communications to mine patterns from a pre-trained CNN, in order to incrementally explain more features in conv-layers. During the learning process, our QA method uses the current AOG for part localization. The QA method actively identifies objects whose feature maps cannot be explained by the AOG. Then, our method asks people to annotate parts on the unexplained objects, and uses answers to discover CNN patterns corresponding to the newly labeled parts. In this way, our method gradually grows new branches and refines existing branches on the AOG to semanticize CNN representations. In experiments, our method exhibited a high learning efficiency. Our method used about 1/6-1/3 of the part annotations for training, but achieved similar or better part-localization performance than fast-RCNN methods.
Hu @cite_58 designed logic rules for network outputs, and used these rules to regularize neural networks and learn meaningful representations. However, this study did not obtain semantic representations in intermediate layers. Some studies extracted neural units with certain semantics from CNNs for different applications. Given feature maps of conv-layers, Zhou @cite_7 @cite_68 extracted scene semantics; the class-activation-mapping core of @cite_68 is sketched after the reference block below. Simon mined objects from feature maps of conv-layers @cite_40, and learned explicit object parts @cite_17.
{ "cite_N": [ "@cite_7", "@cite_58", "@cite_40", "@cite_68", "@cite_17" ], "mid": [ "1899185266", "2963687836", "2949820118", "2950328304", "2949194058" ], "abstract": [ "With the success of new computational architectures for visual processing, such as convolutional neural networks (CNN) and access to image databases with millions of labeled examples (e.g., ImageNet, Places), the state of the art in computer vision is advancing rapidly. One important factor for continued progress is to understand the representations that are learned by the inner layers of these deep architectures. Here we show that object detectors emerge from training CNNs to perform scene classification. As scenes are composed of objects, the CNN for scene classification automatically discovers meaningful objects detectors, representative of the learned scene categories. With object detectors emerging as a result of learning to recognize scenes, our work demonstrates that the same network can perform both scene recognition and object localization in a single forward-pass, without ever having been explicitly taught the notion of objects.", "Combining deep neural networks with structured logic rules is desirable to harness flexibility and reduce uninterpretability of the neural models. We propose a general framework capable of enhancing various types of neural networks (e.g., CNNs and RNNs) with declarative first-order logic rules. Specifically, we develop an iterative distillation method that transfers the structured information of logic rules into the weights of neural networks. We deploy the framework on a CNN for sentiment analysis, and an RNN for named entity recognition. With a few highly intuitive rules, we obtain substantial improvements and achieve state-of-the-art or comparable results to previous best-performing systems.", "Part models of object categories are essential for challenging recognition tasks, where differences in categories are subtle and only reflected in appearances of small parts of the object. We present an approach that is able to learn part models in a completely unsupervised manner, without part annotations and even without given bounding boxes during learning. The key idea is to find constellations of neural activation patterns computed using convolutional neural networks. In our experiments, we outperform existing approaches for fine-grained recognition on the CUB200-2011, NA birds, Oxford PETS, and Oxford Flowers dataset in case no part or bounding box annotations are available and achieve state-of-the-art performance for the Stanford Dog dataset. We also show the benefits of neural constellation models as a data augmentation technique for fine-tuning. Furthermore, our paper unites the areas of generic and fine-grained classification, since our approach is suitable for both scenarios. The source code of our method is available online at this http URL", "In this work, we revisit the global average pooling layer proposed in [13], and shed light on how it explicitly enables the convolutional neural network to have remarkable localization ability despite being trained on image-level labels. While this technique was previously proposed as a means for regularizing training, we find that it actually builds a generic localizable deep representation that can be applied to a variety of tasks. 
Despite the apparent simplicity of global average pooling, we are able to achieve 37.1% top-5 error for object localization on ILSVRC 2014, which is remarkably close to the 34.2% top-5 error achieved by a fully supervised CNN approach. We demonstrate that our network is able to localize the discriminative image regions on a variety of tasks despite not being trained for them.", "Current fine-grained classification approaches often rely on a robust localization of object parts to extract localized feature representations suitable for discrimination. However, part localization is a challenging task due to the large variation of appearance and pose. In this paper, we show how pre-trained convolutional neural networks can be used for robust and efficient object part discovery and localization without the necessity to actually train the network on the current dataset. Our approach called \"part detector discovery\" (PDD) is based on analyzing the gradient maps of the network outputs and finding activation centers spatially related to annotated semantic parts or bounding boxes. This allows us not just to obtain excellent performance on the CUB200-2011 dataset, but in contrast to previous approaches also to perform detection and bird classification jointly without requiring a given bounding box annotation during testing and ground-truth parts during training. The code is available at this http URL and this https URL" ] }
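The localization mechanism in the global-average-pooling work @cite_68 (class activation mapping) has a one-line core: weight the last conv layer's feature maps by the target class's classifier weights. A minimal sketch with the usual tensor conventions, not code from the paper:

```python
import numpy as np

def class_activation_map(features, fc_weights, class_idx):
    """features:   (C, H, W) activations of the last conv layer.
    fc_weights: (n_classes, C) weights of the linear layer that follows
    global average pooling. Returns an (H, W) heatmap for class_idx."""
    cam = np.tensordot(fc_weights[class_idx], features, axes=([0], [0]))  # (H, W)
    cam = np.maximum(cam, 0.0)       # keep positive class evidence only
    return cam / (cam.max() + 1e-8)  # normalize to [0, 1] for visualization
```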
1812.07996
2904935312
In this paper, we present a method to mine object-part patterns from conv-layers of a pre-trained convolutional neural network (CNN). The mined object-part patterns are organized by an And-Or graph (AOG). This interpretable AOG representation consists of a four-layer semantic hierarchy, i.e., semantic parts, part templates, latent patterns, and neural units. The AOG associates each object part with certain neural units in feature maps of conv-layers. The AOG is constructed in a weakly-supervised manner, i.e., very few annotations (e.g., 3-20) of object parts are used to guide the learning of AOGs. We develop a question-answering (QA) method that uses active human-computer communications to mine patterns from a pre-trained CNN, in order to incrementally explain more features in conv-layers. During the learning process, our QA method uses the current AOG for part localization. The QA method actively identifies objects whose feature maps cannot be explained by the AOG. Then, our method asks people to annotate parts on the unexplained objects, and uses answers to discover CNN patterns corresponding to the newly labeled parts. In this way, our method gradually grows new branches and refines existing branches on the AOG to semanticize CNN representations. In experiments, our method exhibited a high learning efficiency. Our method used about 1/6-1/3 of the part annotations for training, but achieved similar or better part-localization performance than fast-RCNN methods.
Many methods have been developed to learn object models in an unsupervised or weakly-supervised manner. Methods of @cite_56 @cite_25 @cite_2 @cite_40 learned from image-level annotations without labeled object bounding boxes. @cite_0 @cite_41 did not require any annotations during the learning process. @cite_32 collected training data online from videos to incrementally learn models. @cite_4 @cite_28 discovered objects and identified actions from language instructions and videos. Inspired by active learning @cite_65 @cite_60 @cite_66, the idea of learning from question-answering has been used to learn object models @cite_26 @cite_57 @cite_16; a minimal active-learning loop is sketched after the reference block below. Branson @cite_24 used human-computer interactions to label object parts and learn part models. Instead of directly building new models from active QA, our method uses the QA to mine AOG part representations from CNN representations.
{ "cite_N": [ "@cite_26", "@cite_4", "@cite_60", "@cite_41", "@cite_28", "@cite_65", "@cite_32", "@cite_66", "@cite_56", "@cite_0", "@cite_57", "@cite_40", "@cite_24", "@cite_2", "@cite_16", "@cite_25" ], "mid": [ "", "2099501835", "2027953712", "2950926078", "2579317665", "1956202674", "2087044413", "2201054263", "2950931866", "2148349024", "1908985308", "2949820118", "2122686738", "2210113682", "2951614005", "2952072685" ], "abstract": [ "", "We propose a visual event recognition framework for consumer videos by leveraging a large amount of loosely labeled web videos (e.g., from YouTube). Observing that consumer videos generally contain large intraclass variations within the same type of events, we first propose a new method, called Aligned Space-Time Pyramid Matching (ASTPM), to measure the distance between any two video clips. Second, we propose a new transfer learning method, referred to as Adaptive Multiple Kernel Learning (A-MKL), in order to 1) fuse the information from multiple pyramid levels and features (i.e., space-time features and static SIFT features) and 2) cope with the considerable variation in feature distributions between videos from two domains (i.e., web video domain and consumer video domain). For each pyramid level and each type of local features, we first train a set of SVM classifiers based on the combined training set from two domains by using multiple base kernels from different kernel types and parameters, which are then fused with equal weights to obtain a prelearned average classifier. In A-MKL, for each event class we learn an adapted target classifier based on multiple base kernels and the prelearned average classifiers from this event class or all the event classes by minimizing both the structural risk functional and the mismatch between data distributions of two domains. Extensive experiments demonstrate the effectiveness of our proposed framework that requires only a small number of labeled consumer videos by leveraging web data. We also conduct an in-depth investigation on various aspects of the proposed method A-MKL, such as the analysis on the combination coefficients on the prelearned classifiers, the convergence of the learning algorithm, and the performance variation by using different proportions of labeled consumer videos. Moreover, we show that A-MKL using the prelearned classifiers from all the event classes leads to better performance when compared with A-MKL using the prelearned classifiers only from each individual event class.", "Active learning and crowdsourcing are promising ways to efficiently build up training sets for object recognition, but thus far techniques are tested in artificially controlled settings. Typically the vision researcher has already determined the dataset's scope, the labels “actively” obtained are in fact already known, and or the crowd-sourced collection process is iteratively fine-tuned. We present an approach for live learning of object detectors, in which the system autonomously refines its models by actively requesting crowd-sourced annotations on images crawled from the Web. To address the technical issues such a large-scale system entails, we introduce a novel part-based detector amenable to linear classifiers, and show how to identify its most uncertain instances in sub-linear time with a hashing-based solution. We demonstrate the approach with experiments of unprecedented scale and autonomy, and show it successfully improves the state-of-the-art for the most challenging objects in the PASCAL benchmark. 
In addition, we show our detector competes well with popular nonlinear classifiers that are much more expensive to train.", "This paper addresses unsupervised discovery and localization of dominant objects from a noisy image collection with multiple object classes. The setting of this problem is fully unsupervised, without even image-level annotations or any assumption of a single dominant class. This is far more general than typical colocalization, cosegmentation, or weakly-supervised localization tasks. We tackle the discovery and localization problem using a part-based region matching approach: We use off-the-shelf region proposals to form a set of candidate bounding boxes for objects and object parts. These regions are efficiently matched across images using a probabilistic Hough transform that evaluates the confidence for each candidate correspondence considering both appearance and spatial consistency. Dominant objects are discovered and localized by comparing the scores of candidate regions and selecting those that stand out over other regions containing them. Extensive experimental evaluations on standard benchmarks demonstrate that the proposed approach significantly outperforms the current state of the art in colocalization, and achieves robust object discovery in challenging mixed-class datasets.", "Advances in video technology and data storage have made large scale video data collections of complex activities readily accessible. An increasingly popular approach for automatically inferring the details of a video is to associate the spatio-temporal segments in a video with its natural language descriptions. Most algorithms for connecting natural language with video rely on pre-aligned supervised training data. Recently, several models have been shown to be effective for unsupervised alignment of objects in video with language. However, it remains difficult to generate good spatio-temporal video segments for actions that align well with language. This paper presents a framework that extracts higher level representations of low-level action features through hyperfeature coding from video and aligns them with language. We propose a two-step process that creates a high-level action feature codebook with temporally consistent motions, and then applies an unsupervised alignment algorithm over the action codewords and verbs in the language to identify individual activities. We show an improvement over previous alignment models of objects and nouns on videos of biological experiments, and also evaluate our system on a larger scale collection of videos involving kitchen activities.", "This paper studies active learning in structured probabilistic models such as Conditional Random Fields (CRFs). This is a challenging problem because unlike unstructured prediction problems such as binary or multi-class classification, structured prediction problems involve a distribution with an exponentially-large support, for instance, over the space of all possible segmentations of an image. Thus, the entropy of such models is typically intractable to compute. We propose a crude yet surprisingly effective histogram approximation to the Gibbs distribution, which replaces the exponentially-large support with a coarsened distribution that may be viewed as a histogram over M bins. 
We show that our approach outperforms a number of baselines and results in a 90% reduction in the number of annotations needed to achieve nearly the same accuracy as learning from the entire dataset.", "Conventional visual recognition systems usually train an image classifier in a batch mode with all training data provided in advance. However, in many practical applications, only a small amount of training samples are available in the beginning and many more would come sequentially during online recognition. Because the image data characteristics could change over time, it is important for the classifier to adapt to the new data incrementally. In this paper, we present an online metric learning method to address the online scene recognition problem via adaptive similarity measurement. Given a number of labeled data followed by a sequential input of unseen testing samples, the similarity metric is learned to maximize the margin of the distance among different classes of samples. By considering the low rank constraint, our online metric learning model not only can provide competitive performance compared with the state-of-the-art methods, but also guarantees convergence. A bi-linear graph is also defined to model the pair-wise similarity, and an unseen sample is labeled depending on the graph-based label propagation, while the model can also self-update using the more confident new samples. With the ability of online learning, our methodology can well handle the large-scale streaming video data with the ability of incremental self-updating. We evaluate our model on online scene categorization, and experiments on various benchmark datasets and comparisons with state-of-the-art methods demonstrate the effectiveness and efficiency of our algorithm.", "Active learning is an effective way to relieve the tedious work of manual annotation in many applications of visual recognition. However, less research attention has been focused on multi-class active learning. In this paper, we propose a novel Gaussian process classifier model with multiple annotators for multi-class visual recognition. Expectation propagation (EP) is adopted for efficient approximate Bayesian inference of our probabilistic model for classification. Based on the EP approximation inference, a generalized Expectation Maximization (GEM) algorithm is derived to estimate both the parameters for instances and the quality of each individual annotator. Also, we incorporate the idea of reinforcement learning to actively select both the informative samples and the high-quality annotators, which better explores the trade-off between exploitation and exploration. The experiments clearly demonstrate the efficacy of the proposed model.", "We present an approach to utilize large amounts of web data for learning CNNs. Specifically inspired by curriculum learning, we present a two-step approach for CNN training. First, we use easy images to train an initial visual representation. We then use this initial CNN and adapt it to harder, more realistic images by leveraging the structure of data and categories. We demonstrate that our two-stage CNN outperforms a fine-tuned CNN trained on ImageNet on Pascal VOC 2012. We also demonstrate the strength of webly supervised learning by localizing objects in web images and training a R-CNN style detector. It achieves the best performance on VOC 2007 where no VOC training data is used. 
Finally, we show our approach is quite robust to noise and performs comparably even when we use image search results from March 2013 (pre-CNN image search era).", "Current methods for training convolutional neural networks depend on large amounts of labeled samples for supervised training. In this paper we present an approach for training a convolutional neural network using only unlabeled data. We train the network to discriminate between a set of surrogate classes. Each surrogate class is formed by applying a variety of transformations to a randomly sampled 'seed' image patch. We find that this simple feature learning algorithm is surprisingly successful when applied to visual object recognition. The feature representation learned by our algorithm achieves classification results matching or outperforming the current state-of-the-art for unsupervised learning on several popular datasets (STL-10, CIFAR-10, Caltech-101).", "The long-standing goal of localizing every object in an image remains elusive. Manually annotating objects is quite expensive despite crowd engineering innovations. Current state-of-the-art automatic object detectors can accurately detect at most a few objects per image. This paper brings together the latest advancements in object detection and in crowd engineering into a principled framework for accurately and efficiently localizing objects in images. The input to the system is an image to annotate and a set of annotation constraints: desired precision, utility and or human cost of the labeling. The output is a set of object annotations, informed by human feedback and computer vision. Our model seamlessly integrates multiple computer vision models with multiple sources of human input in a Markov Decision Process. We empirically validate the effectiveness of our human-in-the-loop labeling approach on the ILSVRC2014 object detection dataset.", "Part models of object categories are essential for challenging recognition tasks, where differences in categories are subtle and only reflected in appearances of small parts of the object. We present an approach that is able to learn part models in a completely unsupervised manner, without part annotations and even without given bounding boxes during learning. The key idea is to find constellations of neural activation patterns computed using convolutional neural networks. In our experiments, we outperform existing approaches for fine-grained recognition on the CUB200-2011, NA birds, Oxford PETS, and Oxford Flowers dataset in case no part or bounding box annotations are available and achieve state-of-the-art performance for the Stanford Dog dataset. We also show the benefits of neural constellation models as a data augmentation technique for fine-tuning. Furthermore, our paper unites the areas of generic and fine-grained classification, since our approach is suitable for both scenarios. The source code of our method is available online at this http URL", "We propose a framework for large scale learning and annotation of structured models. The system interleaves interactive labeling (where the current model is used to semi-automate the labeling of a new example) and online learning (where a newly labeled example is used to update the current model parameters). This framework is scalable to large datasets and complex image models and is shown to have excellent theoretical and practical properties in terms of train time, optimality guarantees, and bounds on the amount of annotation effort per image. 
We apply this framework to part-based detection, and introduce a novel algorithm for interactive labeling of deformable part models. The labeling tool updates and displays in real-time the maximum likelihood location of all parts as the user clicks and drags the location of one or more parts. We demonstrate that the system can be used to efficiently and robustly train part and pose detectors on CUB Birds-200, a challenging dataset of birds in unconstrained pose and environment.", "This paper reformulates the theory of graph mining on the technical basis of graph matching, and extends its scope of applications to computer vision. Given a set of attributed relational graphs (ARGs), we propose to use a hierarchical And-Or Graph (AoG) to model the pattern of maximal-size common subgraphs embedded in the ARGs, and we develop a general method to mine the AoG model from the unlabeled ARGs. This method provides a general solution to the problem of mining hierarchical models from unannotated visual data without exhaustive search of objects. We apply our method to RGB/RGB-D images and videos to demonstrate its generality and the wide range of applicability. The code will be available at https://sites.google.com/site/quanshizhang/mining-and-or-graphs.", "We propose a framework for parsing video and text jointly for understanding events and answering user queries. Our framework produces a parse graph that represents the compositional structures of spatial information (objects and scenes), temporal information (actions and events) and causal information (causalities between events and fluents) in the video and text. The knowledge representation of our framework is based on a spatial-temporal-causal And-Or graph (S/T/C-AOG), which jointly models possible hierarchical compositions of objects, scenes and events as well as their interactions and mutual contexts, and specifies the prior probabilistic distribution of the parse graphs. We present a probabilistic generative model for joint parsing that captures the relations between the input video/text, their corresponding parse graphs and the joint parse graph. Based on the probabilistic model, we propose a joint parsing system consisting of three modules: video parsing, text parsing and joint inference. Video parsing and text parsing produce two parse graphs from the input video and text respectively. The joint inference module produces a joint parse graph by performing matching, deduction and revision on the video and text parse graphs. The proposed framework has the following objectives: Firstly, we aim at deep semantic parsing of video and text that goes beyond the traditional bag-of-words approaches; Secondly, we perform parsing and reasoning across the spatial, temporal and causal dimensions based on the joint S/T/C-AOG representation; Thirdly, we show that deep joint parsing facilitates subsequent applications such as generating narrative text descriptions and answering queries in the forms of who, what, when, where and why. We empirically evaluated our system based on comparison against ground-truth as well as accuracy of query answering and obtained satisfactory results.", "Learning to localize objects with minimal supervision is an important problem in computer vision, since large fully annotated datasets are extremely costly to obtain. In this paper, we propose a new method that achieves this goal with only image-level labels of whether the objects are present or not. Our approach combines a discriminative submodular cover problem for automatically discovering a set of positive object windows with a smoothed latent SVM formulation. The latter allows us to leverage efficient quasi-Newton optimization techniques. Our experiments demonstrate that the proposed approach provides a 50% relative improvement in mean average precision over the current state-of-the-art on PASCAL VOC 2007 detection." ] }
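At its core, the QA-style learning in these works is an active-learning loop: fit a model, find the instances it explains worst, ask a human, repeat. A minimal uncertainty-sampling sketch assuming a scikit-learn-style classifier; y_pool plays the human oracle:

```python
import numpy as np

def uncertainty_sampling(model, X_pool, y_pool, labeled_idx, rounds=5, batch=10):
    """Greedy entropy-based active learning over a fixed pool of examples."""
    labeled = list(labeled_idx)
    unlabeled = [i for i in range(len(X_pool)) if i not in labeled]
    for _ in range(rounds):
        model.fit(X_pool[labeled], y_pool[labeled])
        probs = model.predict_proba(X_pool[unlabeled])
        entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
        # Query the "human" (here: y_pool) on the most uncertain instances;
        # pop in descending order so earlier indices stay valid.
        for j in sorted(np.argsort(entropy)[-batch:], reverse=True):
            labeled.append(unlabeled.pop(j))
    return model
```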
1812.07996
2904935312
In this paper, we present a method to mine object-part patterns from conv-layers of a pre-trained convolutional neural network (CNN). The mined object-part patterns are organized by an And-Or graph (AOG). This interpretable AOG representation consists of a four-layer semantic hierarchy, i.e., semantic parts, part templates, latent patterns, and neural units. The AOG associates each object part with certain neural units in feature maps of conv-layers. The AOG is constructed in a weakly-supervised manner, i.e., very few annotations (e.g., 3-20) of object parts are used to guide the learning of AOGs. We develop a question-answering (QA) method that uses active human-computer communications to mine patterns from a pre-trained CNN, in order to incrementally explain more features in conv-layers. During the learning process, our QA method uses the current AOG for part localization. The QA method actively identifies objects whose feature maps cannot be explained by the AOG. Then, our method asks people to annotate parts on the unexplained objects, and uses answers to discover CNN patterns corresponding to the newly labeled parts. In this way, our method gradually grows new branches and refines existing branches on the AOG to semanticize CNN representations. In experiments, our method exhibited a high learning efficiency. Our method used about 1/6-1/3 of the part annotations for training, but achieved similar or better part-localization performance than fast-RCNN methods.
Transferring hidden patterns of a CNN to other tasks is important for neural networks. Typical research includes end-to-end fine-tuning and transferring CNN representations between different categories @cite_3 @cite_48 or datasets @cite_27; a standard fine-tuning sketch follows the reference block below. In contrast, we believe that a good explanation and a transparent representation of parts will create new possibilities for transferring part features. As in @cite_31 @cite_9, the AOG is suitable for representing the semantic hierarchy, which enables semantic-level interactions between humans and neural networks.
{ "cite_N": [ "@cite_48", "@cite_9", "@cite_3", "@cite_27", "@cite_31" ], "mid": [ "2284929451", "1999160507", "2949667497", "", "347936517" ], "abstract": [ "Top-down information plays a central role in human perception, but plays relatively little role in many current state-of-the-art deep networks, such as Convolutional Neural Networks (CNNs). This work seeks to explore a path by which top-down information can have a direct impact within current deep networks. We explore this path by learning and using \"generators\" corresponding to the network internal effects of three types of transformation (each a restriction of a general affine transformation): rotation, scaling, and translation. We demonstrate how these learned generators can be used to transfer top-down information to novel settings, as mediated by the \"feature flows\" that the transformations (and the associated generators) correspond to inside the network. Specifically, we explore three aspects: 1) using generators as part of a method for synthesizing transformed images --- given a previously unseen image, produce versions of that image corresponding to one or more specified transformations, 2) \"zero-shot learning\" --- when provided with a feature flow corresponding to the effect of a transformation of unknown amount, leverage learned generators as part of a method by which to perform an accurate categorization of the amount of transformation, even for amounts never observed during training, and 3) (inside-CNN) \"data augmentation\" --- improve the classification performance of an existing network by using the learned generators to directly provide additional training \"inside the CNN\".", "This paper presents a framework for unsupervised learning of a hierarchical reconfigurable image template - the AND-OR Template (AOT) for visual objects. The AOT includes: 1) hierarchical composition as \"AND\" nodes, 2) deformation and articulation of parts as geometric \"OR\" nodes, and 3) multiple ways of composition as structural \"OR\" nodes. The terminal nodes are hybrid image templates (HIT) [17] that are fully generative to the pixels. We show that both the structures and parameters of the AOT model can be learned in an unsupervised way from images using an information projection principle. The learning algorithm consists of two steps: 1) a recursive block pursuit procedure to learn the hierarchical dictionary of primitives, parts, and objects, and 2) a graph compression procedure to minimize model structure for better generalizability. We investigate the factors that influence how well the learning algorithm can identify the underlying AOT. And we propose a number of ways to evaluate the performance of the learned AOTs through both synthesized examples and real-world images. Our model advances the state of the art for object detection by improving the accuracy of template matching.", "Many deep neural networks trained on natural images exhibit a curious phenomenon in common: on the first layer they learn features similar to Gabor filters and color blobs. Such first-layer features appear not to be specific to a particular dataset or task, but general in that they are applicable to many datasets and tasks. Features must eventually transition from general to specific by the last layer of the network, but this transition has not been studied extensively. In this paper we experimentally quantify the generality versus specificity of neurons in each layer of a deep convolutional neural network and report a few surprising results. 
Transferability is negatively affected by two distinct issues: (1) the specialization of higher layer neurons to their original task at the expense of performance on the target task, which was expected, and (2) optimization difficulties related to splitting networks between co-adapted neurons, which was not expected. In an example network trained on ImageNet, we demonstrate that either of these two issues may dominate, depending on whether features are transferred from the bottom, middle, or top of the network. We also document that the transferability of features decreases as the distance between the base task and target task increases, but that transferring features even from distant tasks can be better than using random features. A final surprising result is that initializing a network with transferred features from almost any number of layers can produce a boost to generalization that lingers even after fine-tuning to the target dataset.", "", "We present a novel structure learning method, Max Margin AND/OR Graph (MM-AOG), for parsing the human body into parts and recovering their poses. Our method represents the human body and its parts by an AND/OR graph, which is a multi-level mixture of Markov Random Fields (MRFs). Max margin learning, which is a generalization of the training algorithm for support vector machines (SVMs), is used to learn the parameters of the AND/OR graph model discriminatively. There are four advantages from this combination of AND/OR graphs and max-margin learning. Firstly, the AND/OR graph allows us to handle enormous articulated poses with a compact graphical model. Secondly, max-margin learning has more discriminative power than the traditional maximum likelihood approach. Thirdly, the parameters of the AND/OR graph model are optimized globally. In particular, the weights of the appearance model for individual nodes and the relative importance of spatial relationships between nodes are learnt simultaneously. Finally, the kernel trick can be used to handle high dimensional features and to enable complex similarity measure of shapes. We perform comparison experiments on the baseball datasets, showing significant improvements over state of the art methods." ] }
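The transferability findings of @cite_3 are usually exploited by freezing early, general layers and retraining later, task-specific ones. A standard fine-tuning sketch with torchvision; the 10-class head and the hyperparameters are illustrative assumptions:

```python
import torch.nn as nn
import torch.optim as optim
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1")  # source-task representation
for p in model.parameters():
    p.requires_grad = False                       # freeze transferred features
model.fc = nn.Linear(model.fc.in_features, 10)    # fresh head for the target task

# Train only the head; unfreezing deeper blocks trades the generality of
# early features against target-task specialization, as @cite_3 quantifies.
optimizer = optim.SGD(model.fc.parameters(), lr=1e-3, momentum=0.9)
```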
1812.07996
2904935312
In this paper, we present a method to mine object-part patterns from conv-layers of a pre-trained convolutional neural network (CNN). The mined object-part patterns are organized by an And-Or graph (AOG). This interpretable AOG representation consists of a four-layer semantic hierarchy, i.e., semantic parts, part templates, latent patterns, and neural units. The AOG associates each object part with certain neural units in feature maps of conv-layers. The AOG is constructed in a weakly-supervised manner, i.e., very few annotations (e.g., 3-20) of object parts are used to guide the learning of AOGs. We develop a question-answering (QA) method that uses active human-computer communications to mine patterns from a pre-trained CNN, in order to incrementally explain more features in conv-layers. During the learning process, our QA method uses the current AOG for part localization. The QA method actively identifies objects, whose feature maps cannot be explained by the AOG. Then, our method asks people to annotate parts on the unexplained objects, and uses answers to discover CNN patterns corresponding to the newly labeled parts. In this way, our method gradually grows new branches and refines existing branches on the AOG to semanticize CNN representations. In experiments, our method exhibited a high learning efficiency. Our method used about 1 6-1 3 of the part annotations for training, but achieved similar or better part-localization performance than fast-RCNN methods.
Generally speaking, in the scenario of un-/weakly-supervised learning, it is usually more difficult to model object parts than to represent entire objects. For example, object discovery @cite_69 @cite_40 @cite_62 and co-segmentation @cite_61 only require image-level labels without object bounding boxes. Object discovery is mainly implemented by identifying common foreground patterns against the noisy background, with closed boundaries and common object structure usually serving as strong priors.
{ "cite_N": [ "@cite_61", "@cite_40", "@cite_69", "@cite_62" ], "mid": [ "2114542651", "2949820118", "1994488211", "" ], "abstract": [ "We present an algorithm for Interactive Co-segmentation of a foreground object from a group of related images. While previous works in co-segmentation have focussed on unsupervised co-segmentation, we use successful ideas from the interactive object-cutout literature. We develop an algorithm that allows users to decide what foreground is, and then guide the output of the co-segmentation algorithm towards it via scribbles. Interestingly, keeping a user in the loop leads to simpler and highly parallelizable energy functions, allowing us to work with significantly more images per group. However, unlike the interactive single-image counterpart, a user cannot be expected to exhaustively examine all cutouts (from tens of images) returned by the system to make corrections. Hence, we propose iCoseg, an automatic recommendation system that intelligently recommends where the user should scribble next. We introduce and make publicly available the largest co-segmentation dataset yet, the CMU-Cornell iCoseg dataset, with 38 groups, 643 images, and pixelwise hand-annotated groundtruth. Through machine experiments and real user studies with our developed interface, we show that iCoseg can intelligently recommend regions to scribble on, and users following these recommendations can achieve good quality cutouts with significantly lower time and effort than exhaustively examining all cutouts.", "Part models of object categories are essential for challenging recognition tasks, where differences in categories are subtle and only reflected in appearances of small parts of the object. We present an approach that is able to learn part models in a completely unsupervised manner, without part annotations and even without given bounding boxes during learning. The key idea is to find constellations of neural activation patterns computed using convolutional neural networks. In our experiments, we outperform existing approaches for fine-grained recognition on the CUB200-2011, NA birds, Oxford PETS, and Oxford Flowers dataset in case no part or bounding box annotations are available and achieve state-of-the-art performance for the Stanford Dog dataset. We also show the benefits of neural constellation models as a data augmentation technique for fine-tuning. Furthermore, our paper unites the areas of generic and fine-grained classification, since our approach is suitable for both scenarios. The source code of our method is available online at this http URL", "Successful methods for visual object recognition typically rely on training datasets containing lots of richly annotated images. Detailed image annotation, e.g. by object bounding boxes, however, is both expensive and often subjective. We describe a weakly supervised convolutional neural network (CNN) for object classification that relies only on image-level labels, yet can learn from cluttered scenes containing multiple objects. We quantify its object classification and object location prediction performance on the Pascal VOC 2012 (20 object classes) and the much larger Microsoft COCO (80 object classes) datasets. We find that the network (i) outputs accurate image-level labels, (ii) predicts approximate locations (but not extents) of objects, and (iii) performs comparably to its fully-supervised counterparts using object bounding box annotation for training.", "" ] }
1812.07996
2904935312
In this paper, we present a method to mine object-part patterns from conv-layers of a pre-trained convolutional neural network (CNN). The mined object-part patterns are organized by an And-Or graph (AOG). This interpretable AOG representation consists of a four-layer semantic hierarchy, i.e., semantic parts, part templates, latent patterns, and neural units. The AOG associates each object part with certain neural units in feature maps of conv-layers. The AOG is constructed in a weakly-supervised manner, i.e., very few annotations (e.g., 3-20) of object parts are used to guide the learning of AOGs. We develop a question-answering (QA) method that uses active human-computer communications to mine patterns from a pre-trained CNN, in order to incrementally explain more features in conv-layers. During the learning process, our QA method uses the current AOG for part localization. The QA method actively identifies objects, whose feature maps cannot be explained by the AOG. Then, our method asks people to annotate parts on the unexplained objects, and uses answers to discover CNN patterns corresponding to the newly labeled parts. In this way, our method gradually grows new branches and refines existing branches on the AOG to semanticize CNN representations. In experiments, our method exhibited a high learning efficiency. Our method used about 1 6-1 3 of the part annotations for training, but achieved similar or better part-localization performance than fast-RCNN methods.
There are two key points that differentiate our study from conventional part-detection approaches. First, most detection methods deal with classification problems, but inspired by graph mining @cite_2 @cite_11 @cite_13 , we mainly focus on a mining problem: we aim to discover meaningful latent patterns that clarify CNN representations. Second, instead of summarizing common knowledge from massive annotations, our method requires very limited supervision to mine latent patterns.
{ "cite_N": [ "@cite_11", "@cite_13", "@cite_2" ], "mid": [ "2267009405", "", "2210113682" ], "abstract": [ "We categorize this research in terms of its contribution to both graph theory and computer vision. From the theoretical perspective, this study can be considered as the first attempt to formulate the idea of mining maximal frequent subgraphs in the challenging domain of messy visual data, and as a conceptual extension to the unsupervised learning of graph matching. We define a soft attributed pattern (SAP) to represent the common subgraph pattern among a set of attributed relational graphs (ARGs), considering both their structure and attributes. Regarding the differences between ARGs with fuzzy attributes and conventional labeled graphs, we propose a new mining strategy that directly extracts the SAP with the maximal graph size without applying node enumeration. Given an initial graph template and a number of ARGs, we develop an unsupervised method to modify the graph template into the maximal-size SAP. From a practical perspective, this research develops a general platform for learning the category model (i.e., the SAP) from cluttered visual data (i.e., the ARGs) without labeling “what is where,” thereby opening the possibility for a series of applications in the era of big visual data. Experiments demonstrate the superior performance of the proposed method on RGB RGB-D images and videos.", "", "This paper reformulates the theory of graph mining on the technical basis of graph matching, and extends its scope of applications to computer vision. Given a set of attributed relational graphs (ARGs), we propose to use a hierarchical And-Or Graph (AoG) to model the pattern of maximal-size common subgraphs embedded in the ARGs, and we develop a general method to mine the AoG model from the unlabeled ARGs. This method provides a general solution to the problem of mining hierarchical models from unannotated visual data without exhaustive search of objects. We apply our method to RGB RGB-D images and videos to demonstrate its generality and the wide range of applicability. The code will be available at https: sites.google.com site quanshizhang mining-and-or-graphs." ] }
1812.07989
2951745098
Learning fine-grained details is a key issue in image aesthetic assessment. Most of the previous methods extract the fine-grained details via random cropping strategy, which may undermine the integrity of semantic information. Extensive studies show that humans perceive fine-grained details with a mixture of foveal vision and peripheral vision. Fovea has the highest possible visual acuity and is responsible for seeing the details. The peripheral vision is used for perceiving the broad spatial scene and selecting the attended regions for the fovea. Inspired by these observations, we propose a Gated Peripheral-Foveal Convolutional Neural Network (GPF-CNN). It is a dedicated double-subnet neural network, i.e. a peripheral subnet and a foveal subnet. The former aims to mimic the functions of peripheral vision to encode the holistic information and provide the attended regions. The latter aims to extract fine-grained features on these key regions. Considering that the peripheral vision and foveal vision play different roles in processing different visual stimuli, we further employ a gated information fusion (GIF) network to weight their contributions. The weights are determined through the fully connected layers followed by a sigmoid function. We conduct comprehensive experiments on the standard AVA and this http URL datasets for unified aesthetic prediction tasks: (i) aesthetic quality classification; (ii) aesthetic score regression; and (iii) aesthetic score distribution prediction. The experimental results demonstrate the effectiveness of the proposed method.
There is a vast literature on designing effective features for aesthetic assessment, starting with the seminal work of @cite_10 and leading to the recent works of @cite_21 @cite_14 @cite_4 . These features are based on human aesthetic perception and photographic rules. For example, Datta @cite_10 extracted @math features modeling photographic techniques such as the rule of thirds, colorfulness, and saturation. Tang @cite_14 modeled photographic rules (composition, lighting, and color arrangement) by extracting visual features tailored to the variety of photo content. Nishiyama @cite_4 proposed to model the color harmony in aesthetics. Later work by Zhang @cite_39 constructed small-sized connected graphs (graphlets) to encode image composition information. However, such hand-designed features achieve only limited success because 1) they do not transfer across all image categories, since photographic rules vary considerably among different images, and 2) they are heuristic, and some photography rules are difficult to quantify mathematically.
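To make one such hand-crafted cue concrete, the following is a minimal sketch (not taken from any of the cited papers) of the Hasler-Suesstrunk colorfulness metric, a low-level statistic of the kind used in Datta-style feature sets; the image layout and value range are assumptions.

```python
import numpy as np

def colorfulness(img):
    """Hasler-Suesstrunk colorfulness of an RGB image with values in [0, 255].

    img: numpy array of shape (H, W, 3).
    """
    r = img[..., 0].astype(float)
    g = img[..., 1].astype(float)
    b = img[..., 2].astype(float)
    rg = r - g                      # red-green opponent channel
    yb = 0.5 * (r + g) - b          # yellow-blue opponent channel
    sigma = np.sqrt(rg.std() ** 2 + yb.std() ** 2)
    mu = np.sqrt(rg.mean() ** 2 + yb.mean() ** 2)
    return sigma + 0.3 * mu

# Exercise the function on a random "image".
print(colorfulness(np.random.randint(0, 256, (64, 64, 3))))
```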
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_21", "@cite_39", "@cite_10" ], "mid": [ "2056380823", "", "2047691626", "1976414467", "1511924373" ], "abstract": [ "Automatically assessing photo quality from the perspective of visual aesthetics is of great interest in high-level vision research and has drawn much attention in recent years. In this paper, we propose content-based photo quality assessment using both regional and global features. Under this framework, subject areas, which draw the most attentions of human eyes, are first extracted. Then regional features extracted from both subject areas and background regions are combined with global features to assess photo quality. Since professional photographers adopt different photographic techniques and have different aesthetic criteria in mind when taking different types of photos (e.g., landscape versus portrait), we propose to segment subject areas and extract visual features in different ways according to the variety of photo content. We divide the photos into seven categories based on their visual content and develop a set of new subject area extraction methods and new visual features specially designed for different categories. The effectiveness of this framework is supported by extensive experimental comparisons of existing photo quality assessment approaches as well as our new features on different categories of photos. In addition, we propose an approach of online training an adaptive classifier to combine the proposed features according to the visual content of a test photo without knowing its category. Another contribution of this work is to construct a large and diversified benchmark dataset for the research of photo quality assessment. It includes 17,673 photos with manually labeled ground truth. This new benchmark dataset can be down loaded at http: mmlab.ie.cuhk.edu.hk CUHKPQ Dataset.htm.", "", "Automatically assessing the visual esthetics of images is of great interest in high-level vision research and has drawn much attention in recent years. Traditional methods heavily depend on the performance of subject region extraction. This paper proposes to use semantic features in the esthetic assessment system because they can implicitly represent the image topic and be helpful if the subject region extraction fails. Accordingly, a framework combining the hand-crafting features with semantic features is proposed to evaluate image esthetic quality. The experimental results show that the semantic features can improve the performance of image esthetic assessment.", "Photo aesthetic quality evaluation is a fundamental yet under addressed task in computer vision and image processing fields. Conventional approaches are frustrated by the following two drawbacks. First, both the local and global spatial arrangements of image regions play an important role in photo aesthetics. However, existing rules, e.g., visual balance, heuristically define which spatial distribution among the salient regions of a photo is aesthetically pleasing. Second, it is difficult to adjust visual cues from multiple channels automatically in photo aesthetics assessment. To solve these problems, we propose a new photo aesthetics evaluation framework, focusing on learning the image descriptors that characterize local and global structural aesthetics from multiple visual channels. In particular, to describe the spatial structure of the image local regions, we construct graphlets small-sized connected graphs by connecting spatially adjacent atomic regions. 
Since spatially adjacent graphlets distribute closely in their feature space, we project them onto a manifold and subsequently propose an embedding algorithm. The embedding algorithm encodes the photo global spatial layout into graphlets. Simultaneously, the importance of graphlets from multiple visual channels are dynamically adjusted. Finally, these post-embedding graphlets are integrated for photo aesthetics evaluation using a probabilistic model. Experimental results show that: 1) the visualized graphlets explicitly capture the aesthetically arranged atomic regions; 2) the proposed approach generalizes and improves four prominent aesthetic rules; and 3) our approach significantly outperforms state-of-the-art algorithms in photo aesthetics prediction.", "Aesthetics, in the world of art and photography, refers to the principles of the nature and appreciation of beauty. Judging beauty and other aesthetic qualities of photographs is a highly subjective task. Hence, there is no unanimously agreed standard for measuring aesthetic value. In spite of the lack of firm rules, certain features in photographic images are believed, by many, to please humans more than certain others. In this paper, we treat the challenge of automatically inferring aesthetic quality of pictures using their visual content as a machine learning problem, with a peer-rated online photo sharing Website as data source. We extract certain visual features based on the intuition that they can discriminate between aesthetically pleasing and displeasing images. Automated classifiers are built using support vector machines and classification trees. Linear regression on polynomial terms of the features is also applied to infer numerical aesthetics ratings. The work attempts to explore the relationship between emotions which pictures arouse in people, and their low-level content. Potential applications include content-based image retrieval and digital photography." ] }
1812.07989
2951745098
Learning fine-grained details is a key issue in image aesthetic assessment. Most of the previous methods extract the fine-grained details via random cropping strategy, which may undermine the integrity of semantic information. Extensive studies show that humans perceive fine-grained details with a mixture of foveal vision and peripheral vision. Fovea has the highest possible visual acuity and is responsible for seeing the details. The peripheral vision is used for perceiving the broad spatial scene and selecting the attended regions for the fovea. Inspired by these observations, we propose a Gated Peripheral-Foveal Convolutional Neural Network (GPF-CNN). It is a dedicated double-subnet neural network, i.e. a peripheral subnet and a foveal subnet. The former aims to mimic the functions of peripheral vision to encode the holistic information and provide the attended regions. The latter aims to extract fine-grained features on these key regions. Considering that the peripheral vision and foveal vision play different roles in processing different visual stimuli, we further employ a gated information fusion (GIF) network to weight their contributions. The weights are determined through the fully connected layers followed by a sigmoid function. We conduct comprehensive experiments on the standard AVA and this http URL datasets for unified aesthetic prediction tasks: (i) aesthetic quality classification; (ii) aesthetic score regression; and (iii) aesthetic score distribution prediction. The experimental results demonstrate the effectiveness of the proposed method.
Early attempts in image aesthetic assessment cast it as a classification problem, e.g., @cite_42 @cite_24 @cite_37 @cite_41 @cite_20 : images are classified into high or low aesthetic quality by thresholding the weighted mean of the human rating scores. Other research, such as @cite_2 @cite_8 , used regression models to predict the aesthetic score. However, image aesthetic quality assessment is highly subjective: the scores given by different raters may differ greatly due to their cultural backgrounds, so a scalar value is insufficient to convey the degree of consensus or diversity of opinion among annotators @cite_19 . Considering this, recent research focuses on directly predicting the label distribution of the scores. In @cite_19 , Jin proposed a new CJS loss to predict the aesthetic label distribution. Murray @cite_5 used the Huber loss to predict the aesthetic score distribution, but predicted each discrete probability independently. Talebi @cite_15 treated the score distribution as ordered classes and used a squared EMD loss to predict the score distributions. In this paper, similar to @cite_15 , we optimize our networks by minimizing the EMD loss.
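As a concrete reference for this kind of loss, here is a minimal sketch of a squared EMD loss over ordered score bins, in the spirit of @cite_15 ; the batch size and the ten-bin layout are illustrative assumptions.

```python
import torch

def squared_emd_loss(p, q):
    """Earth Mover's Distance (r = 2) between two ordered score distributions.

    p, q: tensors of shape (batch, num_bins), each row summing to 1,
    with bins sorted by increasing score.
    """
    cdf_p = torch.cumsum(p, dim=1)
    cdf_q = torch.cumsum(q, dim=1)
    return torch.sqrt(torch.mean((cdf_p - cdf_q) ** 2, dim=1)).mean()

pred = torch.softmax(torch.randn(4, 10), dim=1)    # e.g., 10 AVA-style score bins
target = torch.softmax(torch.randn(4, 10), dim=1)
print(squared_emd_loss(pred, target))
```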
{ "cite_N": [ "@cite_37", "@cite_8", "@cite_41", "@cite_42", "@cite_24", "@cite_19", "@cite_2", "@cite_5", "@cite_15", "@cite_20" ], "mid": [ "2514622527", "2515223471", "", "2048835603", "2806817979", "2749958701", "2417288846", "", "2754213847", "2217895792" ], "abstract": [ "Human beings often assess the aesthetic quality of an image coupled with the identification of the image’s semantic content. This paper addresses the correlation issue between automatic aesthetic quality assessment and semantic recognition. We cast the assessment problem as the main task among a multi-task deep model, and argue that semantic recognition task offers the key to address this problem. Based on convolutional neural networks, we employ a single and simple multi-task framework to efficiently utilize the supervision of aesthetic and semantic labels. A correlation item between these two tasks is further introduced to the framework by incorporating the inter-task relationship learning. This item not only provides some useful insight about the correlation but also improves assessment accuracy of the aesthetic task. In particular, an effective strategy is developed to keep a balance between the two tasks, which facilitates to optimize the parameters of the framework. Extensive experiments on the challenging Aesthetic Visual Analysis dataset and Photo.net dataset validate the importance of semantic recognition in aesthetic quality assessment, and demonstrate that multitask deep models can discover an effective aesthetic representation to achieve the state-of-the-art results.", "Convolutional Neural Networks (CNNs) have been widely adopted for many imaging applications. For image aesthetics prediction, state-of-the-art algorithms train CNNs on a recently-published large-scale dataset, AVA. However, the distribution of the aesthetic scores on this dataset is extremely unbalanced, which limits the prediction capability of existing methods. We overcome such limitation by using weighted CNNs. We train a regression model that improves the prediction accuracy of the aesthetic scores over state-of-the-art algorithms. In addition, we propose a novel histogram prediction model that not only predicts the aesthetic score, but also estimates the difficulty of performing aesthetics assessment for an input image. We further show an image enhancement application where we obtain an aesthetically pleasing crop of an input image using our regression model.", "", "In this paper, we automatically assess the aesthetic properties of images. In the past, this problem has been addressed by hand-crafting features which would correlate with best photographic practices (e.g. “Does this image respect the rule of thirds?”) or with photographic techniques (e.g. “Is this image a macro?”). We depart from this line of research and propose to use generic image descriptors to assess aesthetic quality. We experimentally show that the descriptors we use, which aggregate statistics computed from low-level local features, implicitly encode the aesthetic properties explicitly used by state-of-the-art methods and outperform them by a significant margin.", "The ability to rank the images based on their appearance finds many real-world applications, such as image retrieval or image album creation. Despite the recent dominance of deep learning methods in computer vision which often result in superior performance, they are not always the methods of choice because they lack interpretability. 
In this paper, we investigate the possibility of improving the image aesthetic inference of the convolutional neural networks with hand-designed features that rely on domain expertise in various fields. We perform a comparison of hand-crafted feature sets in their ability to predict fine-grained aesthetics scores on two image aesthetics data sets. We observe that even feature sets published earlier are able to compete with more recently published algorithms and, by combining the algorithms, a significant improvement in predicting image aesthetics is possible. By using a tree-based learner, we perform the feature elimination to understand the best performing features overall and across different image categories. Only roughly 15 and 8 of the features are needed to achieve full performance in predicting a fine-grained aesthetic score and binary classification, respectively. By combining the hand-crafted features with metafeatures that predict the quality of an image based on convolutional neural network features, the model performs better than a baseline VGG16 model. One can, however, achieve more significant improvement in both aesthetics score prediction and binary classification by fusing the hand-crafted features and the penultimate layer activations. Our experiments indicate an improvement up to 2.2 achieving current state-of-the-art binary classification accuracy on the aesthetic visual analysis data set when the hand-designed features are fused with activations from VGG16 and ResNet50 networks.", "Aesthetic quality prediction is a challenging task in the computer vision community because of the complex interplay with semantic contents and photographic technologies. Recent studies on the powerful deep learning based aesthetic quality assessment usually use a binary high-low label or a numerical score to represent the aesthetic quality. However the scalar representation cannot describe well the underlying varieties of the human perception of aesthetics. In this work, we propose to predict the aesthetic score distribution (i.e., a score distribution vector of the ordinal basic human ratings) using Deep Convolutional Neural Network (DCNN). Conventional DCNNs which aim to minimize the difference between the predicted scalar numbers or vectors and the ground truth cannot be directly used for the ordinal basic rating distribution. Thus, a novel CNN based on the Cumulative distribution with Jensen-Shannon divergence (CJS-CNN) is presented to predict the aesthetic score distribution of human ratings, with a new reliability-sensitive learning method based on the kurtosis of the score distribution, which eliminates the requirement of the original full data of human ratings (without normalization). Experimental results on large scale aesthetic dataset demonstrate the effectiveness of our introduced CJS-CNN in this task.", "Real-world applications could benefit from the ability to automatically generate a fine-grained ranking of photo aesthetics. However, previous methods for image aesthetics analysis have primarily focused on the coarse, binary categorization of images into high- or low-aesthetic categories. In this work, we propose to learn a deep convolutional neural network to rank photo aesthetics in which the relative ranking of photo aesthetics are directly modeled in the loss function. 
Our model incorporates joint learning of meaningful photographic attributes and image content information which can help regularize the complicated photo aesthetics rating problem.", "", "Automatically learned quality assessment for images has recently become a hot topic due to its usefulness in a wide variety of applications, such as evaluating image capture pipelines, storage techniques, and sharing media. Despite the subjective nature of this problem, most existing methods only predict the mean opinion score provided by data sets, such as AVA and TID2013. Our approach differs from others in that we predict the distribution of human opinion scores using a convolutional neural network. Our architecture also has the advantage of being significantly simpler than other methods with comparable performance. Our proposed approach relies on the success (and retraining) of proven, state-of-the-art deep object recognition networks. Our resulting network can be used to not only score images reliably and with high correlation to human perception, but also to assist with adaptation and optimization of photo editing enhancement algorithms in a photographic pipeline. All this is done without need for a “golden” reference image, consequently allowing for single-image, semantic- and perceptually-aware, no-reference quality assessment.", "This paper investigates problems of image style, aesthetics, and quality estimation, which require fine-grained details from high-resolution images, utilizing deep neural network training approach. Existing deep convolutional neural networks mostly extracted one patch such as a down-sized crop from each image as a training example. However, one patch may not always well represent the entire image, which may cause ambiguity during training. We propose a deep multi-patch aggregation network training approach, which allows us to train models using multiple patches generated from one image. We achieve this by constructing multiple, shared columns in the neural network and feeding multiple patches to each of the columns. More importantly, we propose two novel network layers (statistics and sorting) to support aggregation of those patches. The proposed deep multi-patch aggregation network integrates shared feature learning and aggregation function learning into a unified framework. We demonstrate the effectiveness of the deep multi-patch aggregation network on the three problems, i.e., image style recognition, aesthetic quality categorization, and image quality estimation. Our models trained using the proposed networks significantly outperformed the state of the art in all three applications." ] }
1812.07754
2905006172
Voice-enabled commercial products are ubiquitous, typically enabled by lightweight on-device keyword spotting (KWS) and full automatic speech recognition (ASR) in the cloud. ASR systems require significant computational resources in training and for inference, not to mention copious amounts of annotated speech data. KWS systems, on the other hand, are less resource-intensive but have limited capabilities. On the Comcast Xfinity X1 entertainment platform, we explore a middle ground between ASR and KWS: We introduce a novel, resource-efficient neural network for voice query recognition that is much more accurate than state-of-the-art CNNs for KWS, yet can be easily trained and deployed with limited resources. On an evaluation dataset representing the top 200 voice queries, we achieve a low false alarm rate of 1 and a query error rate of 6 . Our model performs inference 8.24x faster than the current ASR system.
The typical approach to voice query recognition is to build a full automatic speech recognition (ASR) system @cite_16 . Open-source toolkits like Kaldi @cite_4 provide ASR models to researchers; however, state-of-the-art commercial systems frequently require thousands of hours of training data @cite_6 and dozens of gigabytes for the combined acoustic and language models @cite_15 . Furthermore, we argue that these systems are excessive for usage scenarios characterized by Zipf's Law, such as those often encountered in voice query recognition: on the X1, for example, the top 200 queries cover a disproportionately large share of our entire voice traffic. Thus, to reduce the computational requirements of training and running a full ASR system, we propose a lightweight model that handles only the top-K queries.
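To illustrate the Zipf-coverage argument, the toy sketch below estimates what fraction of traffic a top-K model would handle; the Zipf exponent and the synthetic log are assumptions, since the real X1 traffic statistics are not public.

```python
from collections import Counter
import numpy as np

rng = np.random.default_rng(0)
queries = rng.zipf(a=1.3, size=100_000)     # synthetic query ids, heavy-tailed

def top_k_coverage(log, k):
    """Fraction of total traffic covered by the k most frequent queries."""
    counts = Counter(log)
    total = sum(counts.values())
    return sum(c for _, c in counts.most_common(k)) / total

print(top_k_coverage(queries, 200))         # share a top-200 model could answer
```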
{ "cite_N": [ "@cite_15", "@cite_16", "@cite_4", "@cite_6" ], "mid": [ "2775304348", "2327501763", "1524333225", "2519224033" ], "abstract": [ "Attention-based encoder-decoder architectures such as Listen, Attend, and Spell (LAS), subsume the acoustic, pronunciation and language model components of a traditional automatic speech recognition (ASR) system into a single neural network. In previous work, we have shown that such architectures are comparable to state-of-theart ASR systems on dictation tasks, but it was not clear if such architectures would be practical for more challenging tasks such as voice search. In this work, we explore a variety of structural and optimization improvements to our LAS model which significantly improve performance. On the structural side, we show that word piece models can be used instead of graphemes. We also introduce a multi-head attention architecture, which offers improvements over the commonly-used single-head attention. On the optimization side, we explore synchronous training, scheduled sampling, label smoothing, and minimum word error rate optimization, which are all shown to improve accuracy. We present results with a unidirectional LSTM encoder for streaming recognition. On a 12, 500 hour voice search task, we find that the proposed changes improve the WER from 9.2 to 5.6 , while the best conventional system achieves 6.7 ; on a dictation task our model achieves a WER of 4.1 compared to 5 for the conventional system.", "We present Listen, Attend and Spell (LAS), a neural speech recognizer that transcribes speech utterances directly to characters without pronunciation models, HMMs or other components of traditional speech recognizers. In LAS, the neural network architecture subsumes the acoustic, pronunciation and language models making it not only an end-to-end trained system but an end-to-end model. In contrast to DNN-HMM, CTC and most other models, LAS makes no independence assumptions about the probability distribution of the output character sequences given the acoustic sequence. Our system has two components: a listener and a speller. The listener is a pyramidal recurrent network encoder that accepts filter bank spectra as inputs. The speller is an attention-based recurrent network decoder that emits each character conditioned on all previous characters, and the entire acoustic sequence. On a Google voice search task, LAS achieves a WER of 14.1 without a dictionary or an external language model and 10.3 with language model rescoring over the top 32 beams. In comparison, the state-of-the-art CLDNN-HMM model achieves a WER of 8.0 on the same set.", "We describe the design of Kaldi, a free, open-source toolkit for speech recognition research. Kaldi provides a speech recognition system based on finite-state automata (using the freely available OpenFst), together with detailed documentation and a comprehensive set of scripts for building complete recognition systems. Kaldi is written is C++, and the core library supports modeling of arbitrary phonetic-context sizes, acoustic modeling with subspace Gaussian mixture models (SGMM) as well as standard Gaussian mixture models, together with all commonly used linear and affine transforms. 
Kaldi is released under the Apache License v2.0, which is highly nonrestrictive, making it suitable for a wide community of users.", "We describe Microsoft's conversational speech recognition system, in which we combine recent developments in neural-network-based acoustic and language modeling to advance the state of the art on the Switchboard recognition task. Inspired by machine learning ensemble techniques, the system uses a range of convolutional and recurrent neural networks. I-vector modeling and lattice-free MMI training provide significant gains for all acoustic model architectures. Language model rescoring with multiple forward and backward running RNNLMs, and word posterior-based system combination provide a 20 boost. The best single system uses a ResNet architecture acoustic model with RNNLM rescoring, and achieves a word error rate of 6.9 on the NIST 2000 Switchboard task. The combined system has an error rate of 6.2 , representing an improvement over previously reported results on this benchmark task." ] }
1812.07869
2904086477
Although a wide variety of deep neural networks for robust Visual Odometry (VO) can be found in the literature, they are still unable to solve the drift problem in long-term robot navigation. Thus, this paper aims to propose novel deep end-to-end networks for long-term 6-DoF VO task. It mainly fuses relative and global networks based on Recurrent Convolutional Neural Networks (RCNNs) to improve the monocular localization accuracy. Indeed, the relative sub-networks are implemented to smooth the VO trajectory, while global subnetworks are designed to avoid drift problem. All the parameters are jointly optimized using Cross Transformation Constraints (CTC), which represents temporal geometric consistency of the consecutive frames, and Mean Square Error (MSE) between the predicted pose and ground truth. The experimental results on both indoor and outdoor datasets show that our method outperforms other state-of-the-art learning-based VO methods in terms of pose accuracy.
Most feature-based methods work by detecting feature points and matching them between consecutive frames. To improve pose accuracy, they minimize the reprojection error between the 3D feature points of the scene and their projections on the image plane; PTAM @cite_4 is a classical vSLAM system of this kind. However, it may suffer from drift since it does not address loop closing. More recently, the ORB-SLAM algorithm of Mur-Artal @cite_14 , a state-of-the-art vSLAM system designed around sparse feature tracking, has reached impressive robustness and accuracy. In practice, it still suffers from a number of problems, such as inconsistency in initialization and drift caused by pure rotation.
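For reference, a minimal numpy sketch of the reprojection error that such feature-based pipelines minimize; the variable names and shapes are assumptions, and real systems minimize this jointly over poses and points (bundle adjustment).

```python
import numpy as np

def reprojection_error(K, R, t, X, u):
    """Sum of squared reprojection errors for one camera.

    K: (3, 3) intrinsics; R: (3, 3) rotation; t: (3,) translation;
    X: (N, 3) 3D feature points; u: (N, 2) observed pixel locations.
    """
    P = (K @ (R @ X.T + t[:, None])).T   # project points into the image
    proj = P[:, :2] / P[:, 2:3]          # perspective division
    return np.sum((proj - u) ** 2)
```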
{ "cite_N": [ "@cite_14", "@cite_4" ], "mid": [ "1612997784", "2151290401" ], "abstract": [ "This paper presents ORB-SLAM, a feature-based monocular simultaneous localization and mapping (SLAM) system that operates in real time, in small and large indoor and outdoor environments. The system is robust to severe motion clutter, allows wide baseline loop closing and relocalization, and includes full automatic initialization. Building on excellent algorithms of recent years, we designed from scratch a novel system that uses the same features for all SLAM tasks: tracking, mapping, relocalization, and loop closing. A survival of the fittest strategy that selects the points and keyframes of the reconstruction leads to excellent robustness and generates a compact and trackable map that only grows if the scene content changes, allowing lifelong operation. We present an exhaustive evaluation in 27 sequences from the most popular datasets. ORB-SLAM achieves unprecedented performance with respect to other state-of-the-art monocular SLAM approaches. For the benefit of the community, we make the source code public.", "This paper presents a method of estimating camera pose in an unknown scene. While this has previously been attempted by adapting SLAM algorithms developed for robotic exploration, we propose a system specifically designed to track a hand-held camera in a small AR workspace. We propose to split tracking and mapping into two separate tasks, processed in parallel threads on a dual-core computer: one thread deals with the task of robustly tracking erratic hand-held motion, while the other produces a 3D map of point features from previously observed video frames. This allows the use of computationally expensive batch optimisation techniques not usually associated with real-time operation: The result is a system that produces detailed maps with thousands of landmarks which can be tracked at frame-rate, with an accuracy and robustness rivalling that of state-of-the-art model-based systems." ] }
1812.07869
2904086477
Although a wide variety of deep neural networks for robust Visual Odometry (VO) can be found in the literature, they are still unable to solve the drift problem in long-term robot navigation. Thus, this paper aims to propose novel deep end-to-end networks for long-term 6-DoF VO task. It mainly fuses relative and global networks based on Recurrent Convolutional Neural Networks (RCNNs) to improve the monocular localization accuracy. Indeed, the relative sub-networks are implemented to smooth the VO trajectory, while global subnetworks are designed to avoid drift problem. All the parameters are jointly optimized using Cross Transformation Constraints (CTC), which represents temporal geometric consistency of the consecutive frames, and Mean Square Error (MSE) between the predicted pose and ground truth. The experimental results on both indoor and outdoor datasets show that our method outperforms other state-of-the-art learning-based VO methods in terms of pose accuracy.
In contrast, direct methods estimate the camera motion by minimizing the photometric error over all pixels across consecutive images. Engel et al. @cite_9 developed LSD-SLAM, one of the most successful direct approaches. Direct methods are less tolerant of changing lighting conditions and often incur higher computational costs than feature-based methods, since they perform a global minimization over all pixels in the image.
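For contrast with the reprojection error above, this is a minimal sketch of the photometric residual a direct method minimizes, using a known depth map and nearest-neighbour sampling; the shapes and the availability of dense depth are simplifying assumptions.

```python
import numpy as np

def photometric_error(I_ref, I_cur, depth, K, R, t):
    """Direct-method photometric residual between two grayscale frames.

    I_ref, I_cur: (H, W) images; depth: (H, W) depth map of I_ref;
    K: (3, 3) intrinsics; (R, t): pose of the current frame w.r.t. the reference.
    """
    H, W = I_ref.shape
    v, u = np.mgrid[0:H, 0:W]
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3 x HW
    X = np.linalg.inv(K) @ pix * depth.reshape(-1)                     # back-project
    p = K @ (R @ X + t[:, None])                                       # into current frame
    up = (p[0] / p[2]).round().astype(int)
    vp = (p[1] / p[2]).round().astype(int)
    valid = (up >= 0) & (up < W) & (vp >= 0) & (vp < H)
    r = I_ref.reshape(-1)[valid] - I_cur[vp[valid], up[valid]]
    return np.sum(r ** 2)
```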
{ "cite_N": [ "@cite_9" ], "mid": [ "612478963" ], "abstract": [ "We propose a direct (feature-less) monocular SLAM algorithm which, in contrast to current state-of-the-art regarding direct methods, allows to build large-scale, consistent maps of the environment. Along with highly accurate pose estimation based on direct image alignment, the 3D environment is reconstructed in real-time as pose-graph of keyframes with associated semi-dense depth maps. These are obtained by filtering over a large number of pixelwise small-baseline stereo comparisons. The explicitly scale-drift aware formulation allows the approach to operate on challenging sequences including large variations in scene scale. Major enablers are two key novelties: (1) a novel direct tracking method which operates on ( sim (3) ), thereby explicitly detecting scale-drift, and (2) an elegant probabilistic solution to include the effect of noisy depth values into tracking. The resulting direct monocular SLAM system runs in real-time on a CPU." ] }
1812.07869
2904086477
Although a wide variety of deep neural networks for robust Visual Odometry (VO) can be found in the literature, they are still unable to solve the drift problem in long-term robot navigation. Thus, this paper aims to propose novel deep end-to-end networks for long-term 6-DoF VO task. It mainly fuses relative and global networks based on Recurrent Convolutional Neural Networks (RCNNs) to improve the monocular localization accuracy. Indeed, the relative sub-networks are implemented to smooth the VO trajectory, while global subnetworks are designed to avoid drift problem. All the parameters are jointly optimized using Cross Transformation Constraints (CTC), which represents temporal geometric consistency of the consecutive frames, and Mean Square Error (MSE) between the predicted pose and ground truth. The experimental results on both indoor and outdoor datasets show that our method outperforms other state-of-the-art learning-based VO methods in terms of pose accuracy.
Learning-based relocalization systems are designed to transfer from recognition to relocalization using very large-scale classification datasets. For example, PoseNet @cite_16 was the first successful end-to-end pre-trained deep CNN approach for 6-DoF pose regression. In addition, @cite_7 introduced deep CNNs with Long Short-Term Memory (LSTM) units to avoid overfitting to the training data, a problem PoseNet must handle with careful dropout strategies.
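As a point of reference, a minimal sketch of a PoseNet-style regression loss; the beta value is scene-dependent in the original paper, and normalizing the predicted quaternion is a common variant rather than necessarily the exact published form.

```python
import torch

def pose_loss(x_hat, q_hat, x, q, beta=250.0):
    """PoseNet-style 6-DoF loss: translation term plus weighted rotation term.

    x_hat, x: (batch, 3) positions; q_hat, q: (batch, 4) quaternions.
    """
    q = q / q.norm(dim=1, keepdim=True)              # unit ground-truth quaternion
    q_hat = q_hat / q_hat.norm(dim=1, keepdim=True)  # common variant: normalize prediction
    pos_err = (x_hat - x).norm(dim=1)
    rot_err = (q_hat - q).norm(dim=1)
    return (pos_err + beta * rot_err).mean()
```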
{ "cite_N": [ "@cite_16", "@cite_7" ], "mid": [ "2200124539", "2584731199" ], "abstract": [ "We present a robust and real-time monocular six degree of freedom relocalization system. Our system trains a convolutional neural network to regress the 6-DOF camera pose from a single RGB image in an end-to-end manner with no need of additional engineering or graph optimisation. The algorithm can operate indoors and outdoors in real time, taking 5ms per frame to compute. It obtains approximately 2m and 3 degrees accuracy for large scale outdoor scenes and 0.5m and 5 degrees accuracy indoors. This is achieved using an efficient 23 layer deep convnet, demonstrating that convnets can be used to solve complicated out of image plane regression problems. This was made possible by leveraging transfer learning from large scale classification data. We show that the PoseNet localizes from high level features and is robust to difficult lighting, motion blur and different camera intrinsics where point based SIFT registration fails. Furthermore we show how the pose feature that is produced generalizes to other scenes allowing us to regress pose with only a few dozen training examples.", "In this work we propose a new CNN+LSTM architecture for camera pose regression for indoor and outdoor scenes. CNNs allow us to learn suitable feature representations for localization that are robust against motion blur and illumination changes. We make use of LSTM units on the CNN output, which play the role of a structured dimensionality reduction on the feature vector, leading to drastic improvements in localization performance. We provide extensive quantitative comparison of CNN-based and SIFT-based localization methods, showing the weaknesses and strengths of each. Furthermore, we present a new large-scale indoor dataset with accurate ground truth from a laser scanner. Experimental results on both indoor and outdoor public datasets show our method outperforms existing deep architectures, and can localize images in hard conditions, e.g., in the presence of mostly textureless surfaces, where classic SIFT-based methods fail." ] }
1812.07869
2904086477
Although a wide variety of deep neural networks for robust Visual Odometry (VO) can be found in the literature, they are still unable to solve the drift problem in long-term robot navigation. Thus, this paper aims to propose novel deep end-to-end networks for long-term 6-DoF VO task. It mainly fuses relative and global networks based on Recurrent Convolutional Neural Networks (RCNNs) to improve the monocular localization accuracy. Indeed, the relative sub-networks are implemented to smooth the VO trajectory, while global subnetworks are designed to avoid drift problem. All the parameters are jointly optimized using Cross Transformation Constraints (CTC), which represents temporal geometric consistency of the consecutive frames, and Mean Square Error (MSE) between the predicted pose and ground truth. The experimental results on both indoor and outdoor datasets show that our method outperforms other state-of-the-art learning-based VO methods in terms of pose accuracy.
Learning-based visual odometry systems are employed to learn the incremental change in position from images. LS-VO @cite_5 is a CNN architecture proposed to learn a latent-space representation of the input optical flow field jointly with the motion estimation task. SfM-Net @cite_22 is a self-supervised, geometry-aware CNN for motion estimation in videos that decomposes frame-to-frame pixel motion in terms of scene and object depth, camera motion, and 3D object rotations and translations. Recently, most state-of-the-art deep approaches to visual odometry employ not only CNNs but also sequence models, such as long short-term memory (LSTM) units @cite_0 , to capture long-term dependencies in camera motion.
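To make the CNN+sequence-model idea concrete, here is a minimal recurrent-VO skeleton in PyTorch; all layer sizes, the stacked-frame-pair input, and the 6-DoF pose head are illustrative assumptions, not the architecture of any cited paper.

```python
import torch
import torch.nn as nn

class RecurrentVO(nn.Module):
    """Minimal sketch: a conv encoder per frame pair, an LSTM over time,
    and a 6-DoF pose head. Layer sizes are purely illustrative."""

    def __init__(self, hidden=256):
        super().__init__()
        self.encoder = nn.Sequential(               # input: stacked frame pair (6 ch)
            nn.Conv2d(6, 32, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.lstm = nn.LSTM(64 * 4 * 4, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 6)            # translation + rotation parameters

    def forward(self, frames):                      # frames: (B, T, 6, H, W)
        B, T = frames.shape[:2]
        f = self.encoder(frames.flatten(0, 1)).flatten(1).view(B, T, -1)
        h, _ = self.lstm(f)                         # temporal context across the clip
        return self.head(h)                         # (B, T, 6) relative poses
```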
{ "cite_N": [ "@cite_0", "@cite_5", "@cite_22" ], "mid": [ "2796635488", "2755457237", "2608018946" ], "abstract": [ "With the success of deep learning based approaches in tackling challenging problems in computer vision, a wide range of deep architectures have recently been proposed for the task of visual odometry (VO) estimation. Most of these proposed solutions rely on supervision, which requires the acquisition of precise ground-truth camera pose information, collected using expensive motion capture systems or high-precision IMU GPS sensor rigs. In this work, we propose an unsupervised paradigm for deep visual odometry learning. We show that using a noisy teacher, which could be a standard VO pipeline, and by designing a loss term that enforces geometric consistency of the trajectory, we can train accurate deep models for VO that do not require ground-truth labels. We leverage geometry as a self-supervisory signal and propose \"Composite Transformation Constraints (CTCs)\", that automatically generate supervisory signals for training and enforce geometric consistency in the VO estimate. We also present a method of characterizing the uncertainty in VO estimates thus obtained. To evaluate our VO pipeline, we present exhaustive ablation studies that demonstrate the efficacy of end-to-end, self-supervised methodologies to train deep models for monocular VO. We show that leveraging concepts from geometry and incorporating them into the training of a recurrent neural network results in performance competitive to supervised deep VO methods.", "This work proposes a novel deep network architecture to solve the camera ego-motion estimation problem. A motion estimation network generally learns features similar to optical flow (OF) fields starting from sequences of images. This OF can be described by a lower dimensional latent space. Previous research has shown how to find linear approximations of this space. We propose to use an autoencoder network to find a nonlinear representation of the OF manifold. In addition, we propose to learn the latent space jointly with the estimation task, so that the learned OF features become a more robust description of the OF input. We call this novel architecture latent space visual odometry (LS-VO). The experiments show that LS-VO achieves a considerable increase in performances with respect to baselines, while the number of parameters of the estimation network only slightly increases.", "We propose SfM-Net, a geometry-aware neural network for motion estimation in videos that decomposes frame-to-frame pixel motion in terms of scene and object depth, camera motion and 3D object rotations and translations. Given a sequence of frames, SfM-Net predicts depth, segmentation, camera and rigid object motions, converts those into a dense frame-to-frame motion field (optical flow), differentiably warps frames in time to match pixels and back-propagates. The model can be trained with various degrees of supervision: 1) self-supervised by the re-projection photometric error (completely unsupervised), 2) supervised by ego-motion (camera motion), or 3) supervised by depth (e.g., as provided by RGBD sensors). SfM-Net extracts meaningful depth estimates and successfully estimates frame-to-frame camera rotations and translations. It often successfully segments the moving objects in the scene, even though such supervision is never provided." ] }
1812.07869
2904086477
Although a wide variety of deep neural networks for robust Visual Odometry (VO) can be found in the literature, they are still unable to solve the drift problem in long-term robot navigation. Thus, this paper aims to propose novel deep end-to-end networks for long-term 6-DoF VO task. It mainly fuses relative and global networks based on Recurrent Convolutional Neural Networks (RCNNs) to improve the monocular localization accuracy. Indeed, the relative sub-networks are implemented to smooth the VO trajectory, while global subnetworks are designed to avoid drift problem. All the parameters are jointly optimized using Cross Transformation Constraints (CTC), which represents temporal geometric consistency of the consecutive frames, and Mean Square Error (MSE) between the predicted pose and ground truth. The experimental results on both indoor and outdoor datasets show that our method outperforms other state-of-the-art learning-based VO methods in terms of pose accuracy.
More recently, learning-based global and relative networks have been designed for 6-DoF global pose regression and odometry estimation from consecutive monocular images. VLocNet @cite_19 is a fusion architecture that incorporates a global pose regression sub-network and a Siamese-type relative pose estimation sub-network; it takes two consecutive monocular images as input and jointly regresses the 6-DoF global pose as well as the 6-DoF relative pose between the images. @cite_11 proposed MapNet, which enforces geometric constraints between relative poses and absolute poses in network training. @cite_21 presented a CNN+Bi-LSTM approach for 6-DoF video-clip relocalization that exploits the temporal smoothness of the video stream to improve the localization accuracy of the global pose estimates. Following these studies, in this paper we consider ways to leverage camera re-localization to improve the accuracy of 6-DoF pose estimation over image sequences.
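One plausible reading of the Cross Transformation Constraint described in the abstract, sketched with 4x4 homogeneous matrices; the exact formulation is an assumption, shown only to convey the idea of temporal geometric consistency between relative and global poses.

```python
import numpy as np

def ctc_consistency(T_g1, T_g2, T_rel):
    """MSE between the relative pose implied by two global pose estimates
    and the directly predicted relative pose (all 4x4 homogeneous matrices)."""
    implied = np.linalg.inv(T_g1) @ T_g2    # frame-1 -> frame-2 from global poses
    return np.mean((implied - T_rel) ** 2)
```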
{ "cite_N": [ "@cite_19", "@cite_21", "@cite_11" ], "mid": [ "2789698879", "2749379418", "" ], "abstract": [ "Localization is an indispensable component of a robot's autonomy stack that enables it to determine where it is in the environment, essentially making it a precursor for any action execution or planning. Although convolutional neural networks have shown promising results for visual localization, they are still grossly outperformed by state-of-the-art local feature-based techniques. In this work, we propose VLocNet, a new convolutional neural network architecture for 6-DoF global pose regression and odometry estimation from consecutive monocular images. Our multitask model incorporates hard parameter sharing, thus being compact and enabling real-time inference, in addition to being end-to-end trainable. We propose a novel loss function that utilizes auxiliary learning to leverage relative pose information during training, thereby constraining the search space to obtain consistent pose estimates. We evaluate our proposed VLocNet on indoor as well as outdoor datasets and show that even our single task model exceeds the performance of state-of-the-art deep architectures for global localization, while achieving competitive performance for visual odometry estimation. Furthermore, we present extensive experimental evaluations utilizing our proposed Geometric Consistency Loss that show the effectiveness of multitask learning and demonstrate that our model is the first deep learning technique to be on par with, and in some cases outperforms state-of-the-art SIFT-based approaches.", "Machine learning techniques, namely convolutional neural networks (CNN) and regression forests, have recently shown great promise in performing 6-DoF localization of monocular images. However, in most cases image-sequences, rather only single images, are readily available. To this extent, none of the proposed learning-based approaches exploit the valuable constraint of temporal smoothness, often leading to situations where the per-frame error is larger than the camera motion. In this paper we propose a recurrent model for performing 6-DoF localization of video-clips. We find that, even by considering only short sequences (20 frames), the pose estimates are smoothed and the localization error can be drastically reduced. Finally, we consider means of obtaining probabilistic pose estimates from our model. We evaluate our method on openly-available real-world autonomous driving and indoor localization datasets.", "" ] }
1812.07762
2905141971
Deep learning has improved many computer vision tasks by utilizing data-driven features instead of using hand-crafted features. However, geometric transformations of input images often degrade the performance of deep learning based methods. In particular, rotation-invariant features are important in computer vision tasks such as face detection, biological feature detection of microscopy images, or robot grasp detection since the input image can be fed into the network with any rotation angle. In this paper, we propose rotation ensemble module (REM) to efficiently train and utilize rotation-invariant features in a deep neural network for computer vision tasks. We evaluated our proposed REM with face detection tasks on FDDB dataset, robotic grasp detection tasks on Cornell dataset, and real robotic grasp tasks with several novel objects. REM based face detection deep neural networks yielded up to 50.8 accuracy in face detection task on FDDB dataset at false rate 20 with IOU 75 , which is about 10.7 higher than the baseline. Robotic grasp detection deep neural networks with our REM also yielded up to 97.6 accuracy in robotic grasp detection on Cornell dataset that is higher than current state-of-the-art performance. In robotic grasp task using a real 4-axis robotic arm with several novel objects, our REM based robotic grasp achieved up to 93.8 , which is significantly higher than the baseline robotic grasps (11.0-56.3 ).
In general, max pooling layers in convolutional neural networks (CNNs) are required to alleviate the issue of spatial variance in CNNs. Assuming that spatial invariance is important for image classification, Jaderberg proposed the spatial transformer network (STN), which transforms an image (or feature map) by learning (affine) transformation parameters so as to improve the performance of the subsequent neural network layers @cite_2 . Lin proposed to apply the STN repeatedly with an inverse compositional method that propagates warp parameters rather than image intensities (or features), unlike the original STN, and this yielded improved performance over the STN @cite_25 .
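To make the STN mechanism concrete, the following is a minimal sketch, assuming PyTorch; the localization network and layer sizes are illustrative stand-ins, not the architecture of @cite_2 . A small network regresses a 2x3 affine matrix, which drives a differentiable resampling of the input.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialTransformer(nn.Module):
    """Minimal STN: regress a 2x3 affine matrix, then warp the input with it."""
    def __init__(self, channels):
        super().__init__()
        # Localization network: predicts 6 affine parameters per image.
        self.loc = nn.Sequential(
            nn.Conv2d(channels, 8, kernel_size=7), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(8 * 4 * 4, 6),
        )
        # Start at the identity transform so early training is stable.
        self.loc[-1].weight.data.zero_()
        self.loc[-1].bias.data.copy_(torch.tensor([1., 0., 0., 0., 1., 0.]))

    def forward(self, x):
        theta = self.loc(x).view(-1, 2, 3)                   # (N, 2, 3)
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)   # warped features

x = torch.randn(2, 3, 32, 32)
y = SpatialTransformer(3)(x)  # same shape as x, spatially re-sampled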
{ "cite_N": [ "@cite_25", "@cite_2" ], "mid": [ "2562066862", "2951005624" ], "abstract": [ "In this paper, we establish a theoretical connection between the classical Lucas & Kanade (LK) algorithm and the emerging topic of Spatial Transformer Networks (STNs). STNs are of interest to the vision and learning communities due to their natural ability to combine alignment and classification within the same theoretical framework. Inspired by the Inverse Compositional (IC) variant of the LK algorithm, we present Inverse Compositional Spatial Transformer Networks (IC-STNs). We demonstrate that IC-STNs can achieve better performance than conventional STNs with less model capacity, in particular, we show superior performance in pure image alignment tasks as well as joint alignment classification problems on real-world problems.", "Convolutional Neural Networks define an exceptionally powerful class of models, but are still limited by the lack of ability to be spatially invariant to the input data in a computationally and parameter efficient manner. In this work we introduce a new learnable module, the Spatial Transformer, which explicitly allows the spatial manipulation of data within the network. This differentiable module can be inserted into existing convolutional architectures, giving neural networks the ability to actively spatially transform feature maps, conditional on the feature map itself, without any extra training supervision or modification to the optimisation process. We show that the use of spatial transformers results in models which learn invariance to translation, scale, rotation and more generic warping, resulting in state-of-the-art performance on several benchmarks, and for a number of classes of transformations." ] }
1812.07762
2905141971
Deep learning has improved many computer vision tasks by utilizing data-driven features instead of hand-crafted features. However, geometric transformations of input images often degrade the performance of deep learning based methods. In particular, rotation-invariant features are important in computer vision tasks such as face detection, biological feature detection in microscopy images, or robot grasp detection, since the input image can be fed into the network at any rotation angle. In this paper, we propose the rotation ensemble module (REM) to efficiently train and utilize rotation-invariant features in a deep neural network for computer vision tasks. We evaluated our proposed REM on face detection tasks on the FDDB dataset, robotic grasp detection tasks on the Cornell dataset, and real robotic grasp tasks with several novel objects. REM based face detection deep neural networks yielded up to 50.8% accuracy in the face detection task on the FDDB dataset at a false rate of 20 with IOU 75%, which is about 10.7% higher than the baseline. Robotic grasp detection deep neural networks with our REM also yielded up to 97.6% accuracy in robotic grasp detection on the Cornell dataset, which is higher than the current state-of-the-art performance. In a robotic grasp task using a real 4-axis robotic arm with several novel objects, our REM based robotic grasp achieved up to 93.8%, which is significantly higher than the baseline robotic grasps (11.0-56.3%).
Esteves proposed a rotation-invariant network by replacing the grid generation part of the STN with a polar transform @cite_15 . They transformed the input feature map (or image) into polar coordinates, with the origin determined by the center of mass, and showed that the polar representation allows more stable parameter prediction than the original STN. Cohen and Welling proposed using group equivariant convolutions and group pooling over weight flips and four rotations with a step size of @math /2 @cite_18 . Patrick proposed rotation-invariant features created with rotational convolutions and pooling layers @cite_19 , where back-rotating the output feature maps before pooling undoes the applied rotations. Diego proposed RotEqNet, which uses a different set of weights for each local window, without weight rotation @cite_20 .
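A minimal sketch of the rotate-convolve-back-rotate-pool recipe shared by these works, assuming PyTorch; the layer and the equivariance check are illustrative and do not reproduce any single paper's exact module.

import torch
import torch.nn as nn

class RotationPoolConv(nn.Module):
    """Apply one shared conv to four rotated copies of the input, back-rotate
    the responses, and max-pool over the orientation axis."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, x):
        responses = []
        for k in range(4):
            xr = torch.rot90(x, k, dims=(2, 3))  # rotate input by k*90 degrees
            responses.append(torch.rot90(self.conv(xr), -k, dims=(2, 3)))  # back-rotate
        # Pooling over orientations makes the output insensitive to which
        # of the four rotations the input arrived in.
        return torch.stack(responses, dim=0).max(dim=0).values

x = torch.randn(1, 3, 32, 32)
layer = RotationPoolConv(3, 16)
out_a = layer(x)
out_b = torch.rot90(layer(torch.rot90(x, 1, dims=(2, 3))), -1, dims=(2, 3))
print(torch.allclose(out_a, out_b, atol=1e-5))  # True: rotating the input rotates the output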
{ "cite_N": [ "@cite_19", "@cite_15", "@cite_18", "@cite_20" ], "mid": [ "2801243570", "2751473119", "2952054889", "" ], "abstract": [ "Despite breakthroughs in image classification due to the evolution of deep learning and, in particular, convolutional neural networks (CNNs), state-of-the-art models only possess a very limited amount of rotational invariance. Known workarounds include artificial rotations of the training data or ensemble approaches, where several models are evaluated. These approaches either increase the workload of the training or inflate the number of parameters. Further approaches add rotational invariance by globally pooling over rotationally equivariant features. Instead, we propose to incorporate rotational invariance into the feature-extraction part of the CNN directly. This allows to train on unrotated data and perform well on a rotated test set. We use rotational convolutions and introduce a rotational pooling layer that performs a pooling over the back-rotated output feature maps. We show that when training on the original, unrotated MNIST training dataset, but evaluating on rotations of the MNIST test dataset, the error rate can be reduced substantially from 58.20 to 12.20 . Similar results are shown for the CIFAR-10 and CIFAR-100 datasets.", "Convolutional neural networks (CNNs) are inherently equivariant to translation. Efforts to embed other forms of equivariance have concentrated solely on rotation. We expand the notion of equivariance in CNNs through the Polar Transformer Network (PTN). PTN combines ideas from the Spatial Transformer Network (STN) and canonical coordinate representations. The result is a network invariant to translation and equivariant to both rotation and scale. PTN is trained end-to-end and composed of three distinct stages: a polar origin predictor, the newly introduced polar transformer module and a classifier. PTN achieves state-of-the-art on rotated MNIST and the newly introduced SIM2MNIST dataset, an MNIST variation obtained by adding clutter and perturbing digits with translation, rotation and scaling. The ideas of PTN are extensible to 3D which we demonstrate through the Cylindrical Transformer Network", "We introduce Group equivariant Convolutional Neural Networks (G-CNNs), a natural generalization of convolutional neural networks that reduces sample complexity by exploiting symmetries. G-CNNs use G-convolutions, a new type of layer that enjoys a substantially higher degree of weight sharing than regular convolution layers. G-convolutions increase the expressive capacity of the network without increasing the number of parameters. Group convolution layers are easy to use and can be implemented with negligible computational overhead for discrete groups generated by translations, reflections and rotations. G-CNNs achieve state of the art results on CIFAR10 and rotated MNIST.", "" ] }
1812.07760
2951129992
A safe and robust on-road navigation system is a crucial component of achieving fully automated vehicles. NVIDIA recently proposed an End-to-End algorithm that can directly learn steering commands from raw pixels of a front camera by using one convolutional neural network. In this paper, we leverage auxiliary information aside from raw images and design a novel network structure, called Auxiliary Task Network (ATN), to help boost the driving performance while maintaining the advantage of minimal training data and an End-to-End training method. In this network, we introduce human prior knowledge into vehicle navigation by transferring features from image recognition tasks. Image semantic segmentation is applied as an auxiliary task for navigation. We consider temporal information by introducing an LSTM module and optical flow to the network. Finally, we combine vehicle kinematics with a sensor fusion step. We discuss the benefits of our method over state-of-the-art visual navigation methods both in the Udacity simulation environment and on the real-world Comma.ai dataset.
Deep neural networks have proven very successful in many fields. Recently, much work has focused on applying deep networks to learn driving policies from human demonstrations. One of the earliest attempts is ALVINN @cite_16 , which used a neural network to directly map front-view camera images to steering angles. NVIDIA @cite_7 recently extended this approach with deep neural networks to demonstrate lane following in more road scenarios. Other concurrent examples of learning End-to-End control of self-driving vehicles include @cite_5 @cite_0 . These works emphasize an End-to-End learning procedure, using a single deep network to learn the driving policy without further architecture design or human prior knowledge. However, this has resulted in data inefficiency and bottlenecks in learning more complex driving behavior.
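A minimal sketch of the End-to-End formulation, assuming PyTorch; the network is an illustrative stand-in, far smaller than the actual architecture of @cite_7 . A CNN regresses the steering angle directly from a front-camera frame and is trained against the recorded human command.

import torch
import torch.nn as nn

# Tiny stand-in for an end-to-end steering network: pixels -> steering angle.
model = nn.Sequential(
    nn.Conv2d(3, 24, 5, stride=2), nn.ReLU(),
    nn.Conv2d(24, 36, 5, stride=2), nn.ReLU(),
    nn.Conv2d(36, 48, 3, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(48, 1),                       # scalar steering command
)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

frames = torch.randn(8, 3, 66, 200)         # batch of front-camera frames
human_angle = torch.randn(8, 1)             # recorded steering labels
loss = nn.functional.mse_loss(model(frames), human_angle)
opt.zero_grad(); loss.backward(); opt.step()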
{ "cite_N": [ "@cite_0", "@cite_5", "@cite_16", "@cite_7" ], "mid": [ "", "2740067745", "2167224731", "2342840547" ], "abstract": [ "", "Lane keeping is an important feature for self-driving cars. This paper presents an end-to-end learning approach to obtain the proper steering angle to maintain the car in the lane. The convolutional neural network (CNN) model takes raw image frames as input and outputs the steering angles accordingly. The model is trained and evaluated using the comma.ai dataset, which contains the front view image frames and the steering angle data captured when driving on the road. Unlike the traditional approach that manually decomposes the autonomous driving problem into technical components such as lane detection, path planning and steering control, the end-to-end model can directly steer the vehicle from the front view camera data after training. It learns how to keep in lane from human driving data. Further discussion of this end-to-end approach and its limitation are also provided.", "ALVINN (Autonomous Land Vehicle In a Neural Network) is a 3-layer back-propagation network designed for the task of road following. Currently ALVINN takes images from a camera and a laser range finder as input and produces as output the direction the vehicle should travel in order to follow the road. Training has been conducted using simulated road images. Successful tests on the Carnegie Mellon autonomous navigation test vehicle indicate that the network can effectively follow real roads under certain field conditions. The representation developed to perform the task differs dramatically when the network is trained under various conditions, suggesting the possibility of a novel adaptive autonomous navigation system capable of tailoring its processing to the conditions at hand.", "We trained a convolutional neural network (CNN) to map raw pixels from a single front-facing camera directly to steering commands. This end-to-end approach proved surprisingly powerful. With minimum training data from humans the system learns to drive in traffic on local roads with or without lane markings and on highways. It also operates in areas with unclear visual guidance such as in parking lots and on unpaved roads. The system automatically learns internal representations of the necessary processing steps such as detecting useful road features with only the human steering angle as the training signal. We never explicitly trained it to detect, for example, the outline of roads. Compared to explicit decomposition of the problem, such as lane marking detection, path planning, and control, our end-to-end system optimizes all processing steps simultaneously. We argue that this will eventually lead to better performance and smaller systems. Better performance will result because the internal components self-optimize to maximize overall system performance, instead of optimizing human-selected intermediate criteria, e.g., lane detection. Such criteria understandably are selected for ease of human interpretation which doesn't automatically guarantee maximum system performance. Smaller networks are possible because the system learns to solve the problem with the minimal number of processing steps. We used an NVIDIA DevBox and Torch 7 for training and an NVIDIA DRIVE(TM) PX self-driving car computer also running Torch 7 for determining where to drive. The system operates at 30 frames per second (FPS)." ] }
1812.07760
2951129992
A safe and robust on-road navigation system is a crucial component of achieving fully automated vehicles. NVIDIA recently proposed an End-to-End algorithm that can directly learn steering commands from raw pixels of a front camera by using one convolutional neural network. In this paper, we leverage auxiliary information aside from raw images and design a novel network structure, called Auxiliary Task Network (ATN), to help boost the driving performance while maintaining the advantage of minimal training data and an End-to-End training method. In this network, we introduce human prior knowledge into vehicle navigation by transferring features from image recognition tasks. Image semantic segmentation is applied as an auxiliary task for navigation. We consider temporal information by introducing an LSTM module and optical flow to the network. Finally, we combine vehicle kinematics with a sensor fusion step. We discuss the benefits of our method over state-of-the-art visual navigation methods both in the Udacity simulation environment and on the real-world Comma.ai dataset.
One way to overcome these problems is to train on a larger dataset. @cite_2 scaled this effort to a larger crowd-sourced dataset and proposed the FCN-LSTM architecture to derive a generic driving model. Another way is to set intermediate goals for the self-driving problem. @cite_3 and Al- @cite_20 map images to a small number of key perception indicators, which they call affordances; these affordances are then associated with actions by hand-designed rules. Other approaches include applying an attention model to learning to drive @cite_18 @cite_21 and using hierarchical structures to learn meta-driving policies @cite_19 .
{ "cite_N": [ "@cite_18", "@cite_21", "@cite_3", "@cite_19", "@cite_2", "@cite_20" ], "mid": [ "2963016445", "", "2953248129", "2767328598", "2559767995", "2740520864" ], "abstract": [ "Deep neural perception and control networks are likely to be a key component of self-driving vehicles. These models need to be explainable - they should provide easy-tointerpret rationales for their behavior - so that passengers, insurance companies, law enforcement, developers etc., can understand what triggered a particular behavior. Here we explore the use of visual explanations. These explanations take the form of real-time highlighted regions of an image that causally influence the network’s output (steering control). Our approach is two-stage. In the first stage, we use a visual attention model to train a convolution network endto- end from images to steering angle. The attention model highlights image regions that potentially influence the network’s output. Some of these are true influences, but some are spurious. We then apply a causal filtering step to determine which input regions actually influence the output. This produces more succinct visual explanations and more accurately exposes the network’s behavior. We demonstrate the effectiveness of our model on three datasets totaling 16 hours of driving. We first show that training with attention does not degrade the performance of the end-to-end network. Then we show that the network causally cues on a variety of features that are used by humans while driving.", "", "Today, there are two major paradigms for vision-based autonomous driving systems: mediated perception approaches that parse an entire scene to make a driving decision, and behavior reflex approaches that directly map an input image to a driving action by a regressor. In this paper, we propose a third paradigm: a direct perception approach to estimate the affordance for driving. We propose to map an input image to a small number of key perception indicators that directly relate to the affordance of a road traffic state for driving. Our representation provides a set of compact yet complete descriptions of the scene to enable a simple controller to drive autonomously. Falling in between the two extremes of mediated perception and behavior reflex, we argue that our direct perception representation provides the right level of abstraction. To demonstrate this, we train a deep Convolutional Neural Network using recording from 12 hours of human driving in a video game and show that our model can work well to drive a car in a very diverse set of virtual environments. We also train a model for car distance estimation on the KITTI dataset. Results show that our direct perception approach can generalize well to real driving images. Source code and data are available on our project website.", "Rather than learning new control policies for each new task, it is possible, when tasks share some structure, to compose a \"meta-policy\" from previously learned policies. This paper reports results from experiments using Deep Reinforcement Learning on a continuous-state, discrete-action autonomous driving simulator. We explore how Deep Neural Networks can represent meta-policies that switch among a set of previously learned policies, specifically in settings where the dynamics of a new scenario are composed of a mixture of previously learned dynamics and where the state observation is possibly corrupted by sensing noise. 
We also report the results of experiments varying dynamics mixes, distractor policies, magnitudes distributions of sensing noise, and obstacles. In a fully observed experiment, the meta-policy learning algorithm achieves 2.6x the reward achieved by the next best policy composition technique with 80 less exploration. In a partially observed experiment, the meta-policy learning algorithm converges after 50 iterations while a direct application of RL fails to converge even after 200 iterations.", "Robust perception-action models should be learned from training data with diverse visual appearances and realistic behaviors, yet current approaches to deep visuomotor policy learning have been generally limited to in-situ models learned from a single vehicle or simulation environment. We advocate learning a generic vehicle motion model from large scale crowd-sourced video data, and develop an end-to-end trainable architecture for learning to predict a distribution over future vehicle egomotion from instantaneous monocular camera observations and previous vehicle state. Our model incorporates a novel FCN-LSTM architecture, which can be learned from large-scale crowd-sourced vehicle action data, and leverages available scene segmentation side tasks to improve performance under a privileged learning paradigm. We provide a novel large-scale dataset of crowd-sourced driving behavior suitable for training our model, and report results predicting the driver action on held out sequences across diverse conditions.", "In this paper, we consider the Direct Perception approach for autonomous driving. Previous efforts in this field focused more on feature extraction of the road markings and other vehicles in the scene rather than on the autonomous driving algorithm and its performance under realistic assumptions. Our main contribution in this paper is introducing a new, more robust, and more realistic Direct Perception framework and corresponding algorithm for autonomous driving. First, we compare the top 3 Convolutional Neural Networks (CNN) models in the feature extraction competitions and test their performance for autonomous driving. The experimental results showed that GoogLeNet performs the best in this application. Subsequently, we propose a deep learning based algorithm for autonomous driving, and we refer to our algorithm as GoogLenet for Autonomous Driving (GLAD). Unlike previous efforts, GLAD makes no unrealistic assumptions about the autonomous vehicle or its surroundings, and it uses only five affordance parameters to control the vehicle as compared to the 14 parameters used by prior efforts. Our simulation results show that the proposed GLAD algorithm outperforms previous Direct Perception algorithms both on empty roads and while driving with other surrounding vehicles." ] }
1812.07760
2951129992
A safe and robust on-road navigation system is a crucial component of achieving fully automated vehicles. NVIDIA recently proposed an End-to-End algorithm that can directly learn steering commands from raw pixels of a front camera by using one convolutional neural network. In this paper, we leverage auxiliary information aside from raw images and design a novel network structure, called Auxiliary Task Network (ATN), to help boost the driving performance while maintaining the advantage of minimal training data and an End-to-End training method. In this network, we introduce human prior knowledge into vehicle navigation by transferring features from image recognition tasks. Image semantic segmentation is applied as an auxiliary task for navigation. We consider temporal information by introducing an LSTM module and optical flow to the network. Finally, we combine vehicle kinematics with a sensor fusion step. We discuss the benefits of our method over state-of-the-art visual navigation methods both in the Udacity simulation environment and on the real-world Comma.ai dataset.
In this paper, we improve the learning of driving policies by adding auxiliary tasks. The idea of using auxiliary tasks to aid the nominal task is not unprecedented. @cite_22 trained a reinforcement learning algorithm with unsupervised auxiliary tasks: by forecasting pixel changes and predicting rewards, the algorithm converges faster and to a higher reward. The concept of transfer learning @cite_4 can also be regarded as training the nominal task with auxiliary tasks. The benefits of training with auxiliary tasks include faster convergence and better performance.
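A hedged sketch of joint training with an auxiliary task, assuming PyTorch; the heads, shapes, and the auxiliary weight lam are illustrative choices, not the ATN architecture itself. A shared encoder feeds both a steering head and a segmentation head, and the two losses are optimized jointly.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AuxTaskNet(nn.Module):
    """Shared encoder with a main (steering) head and an auxiliary
    (segmentation) head trained jointly."""
    def __init__(self, n_classes=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.steer = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                   nn.Linear(32, 1))
        self.seg = nn.Conv2d(32, n_classes, 1)        # per-pixel class logits

    def forward(self, x):
        h = self.encoder(x)
        return self.steer(h), self.seg(h)

net = AuxTaskNet()
img = torch.randn(2, 3, 64, 64)
angle = torch.randn(2, 1)
mask = torch.randint(0, 4, (2, 64, 64))               # auxiliary seg labels
pred_angle, pred_seg = net(img)
lam = 0.5                                             # auxiliary weight (a free choice)
loss = F.mse_loss(pred_angle, angle) + lam * F.cross_entropy(pred_seg, mask)
loss.backward()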
{ "cite_N": [ "@cite_4", "@cite_22" ], "mid": [ "2165698076", "2950872548" ], "abstract": [ "A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research.", "Deep reinforcement learning agents have achieved state-of-the-art results by directly maximising cumulative reward. However, environments contain a much wider variety of possible training signals. In this paper, we introduce an agent that also maximises many other pseudo-reward functions simultaneously by reinforcement learning. All of these tasks share a common representation that, like unsupervised learning, continues to develop in the absence of extrinsic rewards. We also introduce a novel mechanism for focusing this representation upon extrinsic rewards, so that learning can rapidly adapt to the most relevant aspects of the actual task. Our agent significantly outperforms the previous state-of-the-art on Atari, averaging 880 expert human performance, and a challenging suite of first-person, three-dimensional tasks leading to a mean speedup in learning of 10 @math and averaging 87 expert human performance on Labyrinth." ] }
1812.07807
2951825115
Past years have witnessed rapid developments in Neural Machine Translation (NMT). Most recently, with advanced modeling and training techniques, the RNN-based NMT (RNMT) has shown its potential strength, even compared with the well-known Transformer (self-attentional) model. Although the RNMT model can possess very deep architectures through stacking layers, the transition depth between consecutive hidden states along the sequential axis is still shallow. In this paper, we further enhance the RNN-based NMT through increasing the transition depth between consecutive hidden states and build a novel Deep Transition RNN-based Architecture for Neural Machine Translation, named DTMT. This model enhances the hidden-to-hidden transition with multiple non-linear transformations, as well as maintains a linear transformation path throughout this deep transition by the well-designed linear transformation mechanism to alleviate the gradient vanishing problem. Experiments show that with the specially designed deep transition modules, our DTMT can achieve remarkable improvements on translation quality. Experimental results on Chinese->English translation task show that DTMT can outperform the Transformer model by +2.09 BLEU points and achieve the best results ever reported in the same dataset. On WMT14 English->German and English->French translation tasks, DTMT shows superior quality to the state-of-the-art NMT systems, including the Transformer and the RNMT+.
Our work is inspired by the deep transition RNN @cite_14 , which was applied to the language modeling task. barone2017deep first applied this kind of architecture to NMT, though there is still a large margin between that transition model and the state-of-the-art NMT models. Different from these works, we substantially enhance the deep transition architecture and build a state-of-the-art deep transition NMT model from three aspects: 1) fusing the L-GRU and T-GRUs to provide a linear transformation path between consecutive hidden states while preserving the non-linear transformation path; 2) exploiting three deep transition modules; and 3) investigating and combining recent advanced techniques, including multi-head attention, label smoothing, layer normalization, dropout on multiple layers, and positional encoding.
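A hedged sketch of the deep transition idea, assuming PyTorch; the cells below are generic simplifications, not the exact L-GRU/T-GRU equations of the paper. The first cell consumes the input token, and additional transition cells, driven by the hidden state alone, deepen the hidden-to-hidden transformation within each time step.

import torch
import torch.nn as nn

class TransitionCell(nn.Module):
    """GRU-style cell driven only by the hidden state (no external input),
    used to deepen the per-step transition."""
    def __init__(self, d):
        super().__init__()
        self.zr = nn.Linear(d, 2 * d)   # update and reset gates
        self.hh = nn.Linear(d, d)       # candidate state

    def forward(self, h):
        z, r = torch.sigmoid(self.zr(h)).chunk(2, dim=-1)
        h_tilde = torch.tanh(self.hh(r * h))
        return (1 - z) * h + z * h_tilde

class DeepTransitionRNN(nn.Module):
    def __init__(self, d_in, d_hid, depth=3):
        super().__init__()
        self.first = nn.GRUCell(d_in, d_hid)   # consumes the input token
        self.trans = nn.ModuleList(TransitionCell(d_hid) for _ in range(depth))

    def forward(self, seq):                    # seq: (T, B, d_in)
        h = seq.new_zeros(seq.size(1), self.first.hidden_size)
        outs = []
        for x_t in seq:
            h = self.first(x_t, h)
            for cell in self.trans:            # deep transition within one step
                h = cell(h)
            outs.append(h)
        return torch.stack(outs)

y = DeepTransitionRNN(16, 32)(torch.randn(5, 2, 16))   # (5, 2, 32)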
{ "cite_N": [ "@cite_14" ], "mid": [ "2964335273" ], "abstract": [ "Abstract: In this paper, we explore different ways to extend a recurrent neural network (RNN) to a RNN. We start by arguing that the concept of depth in an RNN is not as clear as it is in feedforward neural networks. By carefully analyzing and understanding the architecture of an RNN, however, we find three points of an RNN which may be made deeper; (1) input-to-hidden function, (2) hidden-to-hidden transition and (3) hidden-to-output function. Based on this observation, we propose two novel architectures of a deep RNN which are orthogonal to an earlier attempt of stacking multiple recurrent layers to build a deep RNN (Schmidhuber, 1992; El Hihi and Bengio, 1996). We provide an alternative interpretation of these deep RNNs using a novel framework based on neural operators. The proposed deep RNNs are empirically evaluated on the tasks of polyphonic music prediction and language modeling. The experimental result supports our claim that the proposed deep RNNs benefit from the depth and outperform the conventional, shallow RNNs." ] }
1812.07807
2951825115
Past years have witnessed rapid developments in Neural Machine Translation (NMT). Most recently, with advanced modeling and training techniques, the RNN-based NMT (RNMT) has shown its potential strength, even compared with the well-known Transformer (self-attentional) model. Although the RNMT model can possess very deep architectures through stacking layers, the transition depth between consecutive hidden states along the sequential axis is still shallow. In this paper, we further enhance the RNN-based NMT through increasing the transition depth between consecutive hidden states and build a novel Deep Transition RNN-based Architecture for Neural Machine Translation, named DTMT. This model enhances the hidden-to-hidden transition with multiple non-linear transformations, as well as maintains a linear transformation path throughout this deep transition by the well-designed linear transformation mechanism to alleviate the gradient vanishing problem. Experiments show that with the specially designed deep transition modules, our DTMT can achieve remarkable improvements on translation quality. Experimental results on Chinese->English translation task show that DTMT can outperform the Transformer model by +2.09 BLEU points and achieve the best results ever reported in the same dataset. On WMT14 English->German and English->French translation tasks, DTMT shows superior quality to the state-of-the-art NMT systems, including the Transformer and the RNMT+.
Our work is also inspired by deep stacked RNN models for NMT @cite_5 @cite_4 @cite_12 . ZhouCWLX16 propose fast-forward connections to address the notorious problem of vanishing/exploding gradients in deep stacked RNMT. wangEtAl2017 propose the Linear Associative Unit (LAU) to shorten the gradient path inside the recurrent units. Different from these studies, we focus on the deep transition architecture and propose a novel linear transformation enhanced GRU (L-GRU) for our deep transition RNMT. L-GRU is verified to be more effective than the LAU, while exploiting more concise operations with the same parameter quantity to incorporate the linear transformation of the embedding input. Inspired by RNMT+ @cite_12 , we investigate and combine generally applicable training and optimization techniques, which finally enable our model to achieve superior quality to state-of-the-art NMT systems.
{ "cite_N": [ "@cite_5", "@cite_4", "@cite_12" ], "mid": [ "2963991316", "2963599677", "2896060389" ], "abstract": [ "Neural machine translation (NMT) aims at solving machine translation (MT) problems using neural networks and has exhibited promising results in recent years. However, most of the existing NMT models are shallow and there is still a performance gap between a single NMT model and the best conventional MT system. In this work, we introduce a new type of linear connections, named fast-forward connections, based on deep Long Short-Term Memory (LSTM) networks, and an interleaved bi-directional architecture for stacking the LSTM layers. Fast-forward connections play an essential role in propagating the gradients and building a deep topology of depth 16. On the WMT'14 English-to-French task, we achieve BLEU=37.7 with a single attention model, which outperforms the corresponding single shallow model by 6.2 BLEU points. This is the first time that a single NMT model achieves state-of-the-art performance and outperforms the best conventional model by 0.7 BLEU points. We can still achieve BLEU=36.3 even without using an attention mechanism. After special handling of unknown words and model ensembling, we obtain the best score reported to date on this task with BLEU=40.4. Our models are also validated on the more difficult WMT'14 English-to-German task.", "", "The past year has witnessed rapid advances in sequence-to-sequence (seq2seq) modeling for Machine Translation (MT). The classic RNN-based approaches to MT were first out-performed by the convolutional seq2seq model, which was then out-performed by the more recent Transformer model. Each of these new approaches consists of a fundamental architecture accompanied by a set of modeling and training techniques that are in principle applicable to other seq2seq architectures. In this paper, we tease apart the new architectures and their accompanying techniques in two ways. First, we identify several key modeling and training techniques, and apply them to the RNN architecture, yielding a new RNMT+ model that outperforms all of the three fundamental architectures on the benchmark WMT'14 English to French and English to German tasks. Second, we analyze the properties of each fundamental seq2seq architecture and devise new hybrid architectures intended to combine their strengths. Our hybrid models obtain further improvements, outperforming the RNMT+ model on both benchmark datasets." ] }
1812.07776
2905503814
This paper considers the problem of optimum reconstruction in generalized sampling-reconstruction processes (GSRPs). We propose constrained GSRP, a novel framework that minimizes the reconstruction error for inputs in a subspace, subject to a constraint on the maximum regret-error for any other signal in the entire signal space. This framework addresses the primary limitation of existing GSRPs (consistent, subspace and minimax regret), namely, the assumption that the a priori subspace is either fully known or fully ignored. We formulate constrained GSRP as a constrained optimization problem, the solution to which turns out to be a convex combination of the subspace and the minimax regret samplings. Detailed theoretical analysis on the reconstruction error shows that constrained sampling achieves a reconstruction that is 1) (sub)optimal for signals in the input subspace, 2) robust for signals around the input subspace, and 3) reasonably bounded for any other signals with a simple choice of the constraint parameter. Experimental results on sampling-reconstruction of a Gaussian input and a speech signal demonstrate the effectiveness of the proposed scheme.
Similarly, let @math be spanned by vectors @math . Then @math can be described by the adjoint (analysis) operator
\[ S^{*} x = c, \qquad c[n] = \langle x, s_n \rangle, \quad \forall n, \]
since, by the definition of the adjoint operator @cite_0 ,
\[ \langle S a, x \rangle = \langle a, S^{*} x \rangle \quad \text{for all } x \text{ and } a. \]
Note that the nullspace of @math is the orthogonal complement of @math , i.e., @math (see @cite_0 ).
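In finite dimensions these operators are plain matrices, which makes the adjoint identity and the nullspace statement easy to verify numerically; a small sketch, assuming NumPy, with random vectors standing in for the sampling set:

import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 3
S = rng.standard_normal((n, m))     # columns s_1..s_m span the sampling space

# Synthesis is S @ a; analysis (the adjoint) is c[n] = <x, s_n>, i.e. S.T @ x.
x = rng.standard_normal(n)
a = rng.standard_normal(m)
print(np.isclose(np.dot(S @ a, x), np.dot(a, S.T @ x)))   # adjoint identity

# The nullspace of S.T is the orthogonal complement of span(S):
x_perp = x - S @ np.linalg.lstsq(S, x, rcond=None)[0]     # remove span(S) part
print(np.allclose(S.T @ x_perp, 0))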
{ "cite_N": [ "@cite_0" ], "mid": [ "2107221565" ], "abstract": [ "Metric Spaces. Normed Spaces Banach Spaces. Inner Product Spaces Hilbert Spaces. Fundamental Theorems for Normed and Banach Spaces. Further Applications: Banach Fixed Point Theorem. Spectral Theory of Linear Operators in Normed Spaces. Compact Linear Operators on Normed Spaces and Their Spectrum. Spectral Theory of Bounded Self--Adjoint Linear Operators. Unbounded Linear Operators in Hilbert Space. Unbounded Linear Operators in Quantum Mechanics. Appendices. References. Index." ] }
1812.07776
2905503814
This paper considers the problem of optimum reconstruction in generalized sampling-reconstruction processes (GSRPs). We propose constrained GSRP, a novel framework that minimizes the reconstruction error for inputs in a subspace, subject to a constraint on the maximum regret-error for any other signal in the entire signal space. This framework addresses the primary limitation of existing GSRPs (consistent, subspace and minimax regret), namely, the assumption that the a priori subspace is either fully known or fully ignored. We formulate constrained GSRP as a constrained optimization problem, the solution to which turns out to be a convex combination of the subspace and the minimax regret samplings. Detailed theoretical analysis on the reconstruction error shows that constrained sampling achieves a reconstruction that is 1) (sub)optimal for signals in the input subspace, 2) robust for signals around the input subspace, and 3) reasonably bounded for any other signals with a simple choice of the constraint parameter. Experimental results on sampling-reconstruction of a Gaussian input and a speech signal demonstrate the effectiveness of the proposed scheme.
Under the frame assumption on @math , @math can be represented in terms of the analysis and synthesis operators as in @cite_17 , where @math denotes the Moore-Penrose pseudoinverse.
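Continuing the finite-dimensional picture, the pseudoinverse yields the orthogonal projection onto the span; a short NumPy sketch:

import numpy as np

rng = np.random.default_rng(1)
S = rng.standard_normal((8, 3))
P_S = S @ np.linalg.pinv(S)          # orthogonal projection onto span(S)

print(np.allclose(P_S @ P_S, P_S))   # idempotent
print(np.allclose(P_S, P_S.T))       # self-adjoint, hence orthogonal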
{ "cite_N": [ "@cite_17" ], "mid": [ "2147722152" ], "abstract": [ "We treat the problem of reconstructing a signal from its nonideal samples where the sampling and reconstruction spaces as well as the class of input signals can be arbitrary subspaces of a Hilbert space. Our formulation is general, and includes as special cases reconstruction from finitely many samples as well as uniform-sampling of continuous-time signals, which are not necessarily bandlimited. To obtain a good approximation of the signal in the reconstruction space from its samples, we suggest two design strategies that attempt to minimize the squared-norm error between the signal and its reconstruction. The approaches we propose differ in their assumptions on the input signal: If the signal is known to lie in an appropriately chosen subspace, then we propose a method that achieves the minimal squared error. On the other hand, when the signal is not restricted, we show that the minimal-norm reconstruction cannot generally be obtained. Instead, we suggest minimizing the worst-case squared error between the reconstructed signal, and the best possible (but usually unattainable) approximation of the signal within the reconstruction space. We demonstrate both theoretically and through simulations that the suggested methods can outperform the consistent reconstruction approach previously proposed for this problem." ] }
1812.07776
2905503814
This paper considers the problem of optimum reconstruction in generalized sampling-reconstruction processes (GSRPs). We propose constrained GSRP, a novel framework that minimizes the reconstruction error for inputs in a subspace, subject to a constraint on the maximum regret-error for any other signal in the entire signal space. This framework addresses the primary limitation of existing GSRPs (consistent, subspace and minimax regret), namely, the assumption that the a priori subspace is either fully known or fully ignored. We formulate constrained GSRP as a constrained optimization problem, the solution to which turns out to be a convex combination of the subspace and the minimax regret samplings. Detailed theoretical analysis on the reconstruction error shows that constrained sampling achieves a reconstruction that is 1) (sub)optimal for signals in the input subspace, 2) robust for signals around the input subspace, and 3) reasonably bounded for any other signals with a simple choice of the constraint parameter. Experimental results on sampling-reconstruction of a Gaussian input and a speech signal demonstrate the effectiveness of the proposed scheme.
According to @cite_17 , the orthogonal projection @math is subject to a fundamental limitation of the GSRP: unless the reconstruction subspace is a subset of the sampling subspace, there exists no correction filter @math that renders the GSRP @math equal to the orthogonal projection @math .
{ "cite_N": [ "@cite_17" ], "mid": [ "2147722152" ], "abstract": [ "We treat the problem of reconstructing a signal from its nonideal samples where the sampling and reconstruction spaces as well as the class of input signals can be arbitrary subspaces of a Hilbert space. Our formulation is general, and includes as special cases reconstruction from finitely many samples as well as uniform-sampling of continuous-time signals, which are not necessarily bandlimited. To obtain a good approximation of the signal in the reconstruction space from its samples, we suggest two design strategies that attempt to minimize the squared-norm error between the signal and its reconstruction. The approaches we propose differ in their assumptions on the input signal: If the signal is known to lie in an appropriately chosen subspace, then we propose a method that achieves the minimal squared error. On the other hand, when the signal is not restricted, we show that the minimal-norm reconstruction cannot generally be obtained. Instead, we suggest minimizing the worst-case squared error between the reconstructed signal, and the best possible (but usually unattainable) approximation of the signal within the reconstruction space. We demonstrate both theoretically and through simulations that the suggested methods can outperform the consistent reconstruction approach previously proposed for this problem." ] }
1812.07776
2905503814
This paper considers the problem of optimum reconstruction in generalized sampling-reconstruction processes (GSRPs). We propose constrained GSRP, a novel framework that minimizes the reconstruction error for inputs in a subspace, subject to a constraint on the maximum regret-error for any other signal in the entire signal space. This framework addresses the primary limitation of existing GSRPs (consistent, subspace and minimax regret), namely, the assumption that the a priori subspace is either fully known or fully ignored. We formulate constrained GSRP as a constrained optimization problem, the solution to which turns out to be a convex combination of the subspace and the minimax regret samplings. Detailed theoretical analysis on the reconstruction error shows that constrained sampling achieves a reconstruction that is 1) (sub)optimal for signals in the input subspace, 2) robust for signals around the input subspace, and 3) reasonably bounded for any other signals with a simple choice of the constraint parameter. Experimental results on sampling-reconstruction of a Gaussian input and a speech signal demonstrate the effectiveness of the proposed scheme.
Acknowledging the optimality as well as the limitation of the orthogonal projection, we now introduce the difference between the GSRP @math and @math , which is, in the spirit of @cite_17 , referred to as the regret-error system. The regret-error signal is given as
\[ R x = P_{\mathcal{W}} x - x_r = \left( P_{\mathcal{W}} - W Q S^{*} \right) x . \]
{ "cite_N": [ "@cite_17" ], "mid": [ "2147722152" ], "abstract": [ "We treat the problem of reconstructing a signal from its nonideal samples where the sampling and reconstruction spaces as well as the class of input signals can be arbitrary subspaces of a Hilbert space. Our formulation is general, and includes as special cases reconstruction from finitely many samples as well as uniform-sampling of continuous-time signals, which are not necessarily bandlimited. To obtain a good approximation of the signal in the reconstruction space from its samples, we suggest two design strategies that attempt to minimize the squared-norm error between the signal and its reconstruction. The approaches we propose differ in their assumptions on the input signal: If the signal is known to lie in an appropriately chosen subspace, then we propose a method that achieves the minimal squared error. On the other hand, when the signal is not restricted, we show that the minimal-norm reconstruction cannot generally be obtained. Instead, we suggest minimizing the worst-case squared error between the reconstructed signal, and the best possible (but usually unattainable) approximation of the signal within the reconstruction space. We demonstrate both theoretically and through simulations that the suggested methods can outperform the consistent reconstruction approach previously proposed for this problem." ] }
1812.07776
2905503814
This paper considers the problem of optimum reconstruction in generalized sampling-reconstruction processes (GSRPs). We propose constrained GSRP, a novel framework that minimizes the reconstruction error for inputs in a subspace, subject to a constraint on the maximum regret-error for any other signal in the entire signal space. This framework addresses the primary limitation of existing GSRPs (consistent, subspace and minimax regret), namely, the assumption that the a priori subspace is either fully known or fully ignored. We formulate constrained GSRP as a constrained optimization problem, the solution to which turns out to be a convex combination of the subspace and the minimax regret samplings. Detailed theoretical analysis on the reconstruction error shows that constrained sampling achieves a reconstruction that is 1) (sub)optimal for signals in the input subspace, 2) robust for signals around the input subspace, and 3) reasonably bounded for any other signals with a simple choice of the constraint parameter. Experimental results on sampling-reconstruction of a Gaussian input and a speech signal demonstrate the effectiveness of the proposed scheme.
Assume that the following direct-sum condition holds:
\[ \mathcal{W} \oplus \mathcal{S}^{\perp} = \mathcal{H} . \]
Then the correction filter provides an error-free reconstruction for input signals in @math @cite_5 .
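A small numerical sketch of this perfect-recovery property, assuming NumPy; the correction filter is instantiated as the classical consistent choice inv(S.T @ W), which is the standard construction under the direct-sum condition, though the paper's own notation may differ:

import numpy as np

rng = np.random.default_rng(2)
n, m = 10, 3
S = rng.standard_normal((n, m))      # basis of the sampling space
W = rng.standard_normal((n, m))      # basis of the reconstruction space

Q = np.linalg.inv(S.T @ W)           # consistent correction filter
reconstruct = lambda x: W @ (Q @ (S.T @ x))

x_in = W @ rng.standard_normal(m)    # signal inside the reconstruction space
print(np.allclose(reconstruct(x_in), x_in))    # True: error-free on the subspace

x_out = rng.standard_normal(n)       # generic signal
print(np.allclose(reconstruct(x_out), x_out))  # generally False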
{ "cite_N": [ "@cite_5" ], "mid": [ "2013180153" ], "abstract": [ "This article introduces a general framework for sampling and reconstruction pro- cedures based on a consistency requirement, introduced by Unser and Aldroubi in (29). The procedures we develop allow for almost arbitrary sampling and reconstruction spaces, as well as arbitrary input signals. We first derive a nonredundant sampling procedure. We then introduce the new concept of oblique dual frame vectors, that lead to frame expansions in which the analysis and synthesis frame vectors are not constrained to lie in the same space. Based on this notion, we develop a redundant sampling procedure that can be used to reduce the quantization error when quantizing the measurements prior to reconstruction." ] }
1812.07776
2905503814
This paper considers the problem of optimum reconstruction in generalized sampling-reconstruction processes (GSRPs). We propose constrained GSRP, a novel framework that minimizes the reconstruction error for inputs in a subspace, subject to a constraint on the maximum regret-error for any other signal in the entire signal space. This framework addresses the primary limitation of existing GSRPs (consistent, subspace and minimax regret), namely, the assumption that the a priori subspace is either fully known or fully ignored. We formulate constrained GSRP as a constrained optimization problem, the solution to which turns out to be a convex combination of the subspace and the minimax regret samplings. Detailed theoretical analysis on the reconstruction error shows that constrained sampling achieves a reconstruction that is 1) (sub)optimal for signals in the input subspace, 2) robust for signals around the input subspace, and 3) reasonably bounded for any other signals with a simple choice of the constraint parameter. Experimental results on sampling-reconstruction of a Gaussian input and a speech signal demonstrate the effectiveness of the proposed scheme.
The absolute error for each input can be derived as follows:
\[ \| E x \|^{2} = \| P_{\mathcal{W}}^{\perp} x \|^{2} + \| R x \|^{2} , \]
so the regret-error @math is the component of the error that lies inside the reconstruction subspace. From @cite_17 , the absolute error can be bounded in terms of the subspace angles, and the regret-error admits an analogous bound.
{ "cite_N": [ "@cite_17" ], "mid": [ "2147722152" ], "abstract": [ "We treat the problem of reconstructing a signal from its nonideal samples where the sampling and reconstruction spaces as well as the class of input signals can be arbitrary subspaces of a Hilbert space. Our formulation is general, and includes as special cases reconstruction from finitely many samples as well as uniform-sampling of continuous-time signals, which are not necessarily bandlimited. To obtain a good approximation of the signal in the reconstruction space from its samples, we suggest two design strategies that attempt to minimize the squared-norm error between the signal and its reconstruction. The approaches we propose differ in their assumptions on the input signal: If the signal is known to lie in an appropriately chosen subspace, then we propose a method that achieves the minimal squared error. On the other hand, when the signal is not restricted, we show that the minimal-norm reconstruction cannot generally be obtained. Instead, we suggest minimizing the worst-case squared error between the reconstructed signal, and the best possible (but usually unattainable) approximation of the signal within the reconstruction space. We demonstrate both theoretically and through simulations that the suggested methods can outperform the consistent reconstruction approach previously proposed for this problem." ] }
1812.07776
2905503814
This paper considers the problem of optimum reconstruction in generalized sampling-reconstruction processes (GSRPs). We propose constrained GSRP, a novel framework that minimizes the reconstruction error for inputs in a subspace, subject to a constraint on the maximum regret-error for any other signal in the entire signal space. This framework addresses the primary limitation of existing GSRPs (consistent, subspace and minimax regret), namely, the assumption that the a priori subspace is either fully known or fully ignored. We formulate constrained GSRP as a constrained optimization problem, the solution to which turns out to be a convex combination of the subspace and the minimax regret samplings. Detailed theoretical analysis on the reconstruction error shows that constrained sampling achieves a reconstruction that is 1) (sub)optimal for signals in the input subspace, 2) robust for signals around the input subspace, and 3) reasonably bounded for any other signals with a simple choice of the constraint parameter. Experimental results on sampling-reconstruction of a Gaussian input and a speech signal demonstrate the effectiveness of the proposed scheme.
Recall that the filter @math is the minimizer of the reconstruction error for input @math , since it is the solution to an optimization problem posed in @cite_17 , in which @math represents the given sample sequence of the input signal @math and the scalar @math serves as a bound on @math so that the objective function of the problem remains bounded.
{ "cite_N": [ "@cite_17" ], "mid": [ "2147722152" ], "abstract": [ "We treat the problem of reconstructing a signal from its nonideal samples where the sampling and reconstruction spaces as well as the class of input signals can be arbitrary subspaces of a Hilbert space. Our formulation is general, and includes as special cases reconstruction from finitely many samples as well as uniform-sampling of continuous-time signals, which are not necessarily bandlimited. To obtain a good approximation of the signal in the reconstruction space from its samples, we suggest two design strategies that attempt to minimize the squared-norm error between the signal and its reconstruction. The approaches we propose differ in their assumptions on the input signal: If the signal is known to lie in an appropriately chosen subspace, then we propose a method that achieves the minimal squared error. On the other hand, when the signal is not restricted, we show that the minimal-norm reconstruction cannot generally be obtained. Instead, we suggest minimizing the worst-case squared error between the reconstructed signal, and the best possible (but usually unattainable) approximation of the signal within the reconstruction space. We demonstrate both theoretically and through simulations that the suggested methods can outperform the consistent reconstruction approach previously proposed for this problem." ] }
1812.07776
2905503814
This paper considers the problem of optimum reconstruction in generalized sampling-reconstruction processes (GSRPs). We propose constrained GSRP, a novel framework that minimizes the reconstruction error for inputs in a subspace, subject to a constraint on the maximum regret-error for any other signal in the entire signal space. This framework addresses the primary limitation of existing GSRPs (consistent, subspace and minimax regret), namely, the assumption that the a priori subspace is either fully known or fully ignored. We formulate constrained GSRP as a constrained optimization problem, the solution to which turns out to be a convex combination of the subspace and the minimax regret samplings. Detailed theoretical analysis on the reconstruction error shows that constrained sampling achieves a reconstruction that is 1) (sub)optimal for signals in the input subspace, 2) robust for signals around the input subspace, and 3) reasonably bounded for any other signals with a simple choice of the constraint parameter. Experimental results on sampling-reconstruction of a Gaussian input and a speech signal demonstrate the effectiveness of the proposed scheme.
Introduced in @cite_17 , minimax regret sampling alleviates the drawback of large errors associated with the consistent and subspace samplings. This is achieved by minimizing the maximum regret-error rather than the absolute error.
{ "cite_N": [ "@cite_17" ], "mid": [ "2147722152" ], "abstract": [ "We treat the problem of reconstructing a signal from its nonideal samples where the sampling and reconstruction spaces as well as the class of input signals can be arbitrary subspaces of a Hilbert space. Our formulation is general, and includes as special cases reconstruction from finitely many samples as well as uniform-sampling of continuous-time signals, which are not necessarily bandlimited. To obtain a good approximation of the signal in the reconstruction space from its samples, we suggest two design strategies that attempt to minimize the squared-norm error between the signal and its reconstruction. The approaches we propose differ in their assumptions on the input signal: If the signal is known to lie in an appropriately chosen subspace, then we propose a method that achieves the minimal squared error. On the other hand, when the signal is not restricted, we show that the minimal-norm reconstruction cannot generally be obtained. Instead, we suggest minimizing the worst-case squared error between the reconstructed signal, and the best possible (but usually unattainable) approximation of the signal within the reconstruction space. We demonstrate both theoretically and through simulations that the suggested methods can outperform the consistent reconstruction approach previously proposed for this problem." ] }
1812.07776
2905503814
This paper considers the problem of optimum reconstruction in generalized sampling-reconstruction processes (GSRPs). We propose constrained GSRP, a novel framework that minimizes the reconstruction error for inputs in a subspace, subject to a constraint on the maximum regret-error for any other signal in the entire signal space. This framework addresses the primary limitation of existing GSRPs (consistent, subspace and minimax regret), namely, the assumption that the a priori subspace is either fully known or fully ignored. We formulate constrained GSRP as a constrained optimization problem, the solution to which turns out to be a convex combination of the subspace and the minimax regret samplings. Detailed theoretical analysis on the reconstruction error shows that constrained sampling achieves a reconstruction that is 1) (sub)optimal for signals in the input subspace, 2) robust for signals around the input subspace, and 3) reasonably bounded for any other signals with a simple choice of the constraint parameter. Experimental results on sampling-reconstruction of a Gaussian input and a speech signal demonstrate the effectiveness of the proposed scheme.
Consider the minimax optimization problem over the correction filter. Its solution renders the GSRP the product of two orthogonal projections, from which the regret-error system and the error system follow directly. Moreover, the regret-error is shown in @cite_17 to be bounded in terms of the subspace angles, and the absolute error admits a corresponding bound. The above error estimates imply that @math results in a good reconstruction for @math , at the cost of introducing error for @math (or @math ). Since it does not differentiate among input signals, it can be very conservative for signals in the input subspace.
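The same toy setup contrasts the two samplings numerically; a sketch, assuming NumPy, with projections built from pseudoinverses as before:

import numpy as np

rng = np.random.default_rng(3)
n, m = 10, 3
S = rng.standard_normal((n, m))
W = rng.standard_normal((n, m))
P_S = S @ np.linalg.pinv(S)          # orthogonal projection onto span(S)
P_W = W @ np.linalg.pinv(W)          # orthogonal projection onto span(W)

consistent = lambda x: W @ np.linalg.inv(S.T @ W) @ (S.T @ x)
minimax = lambda x: P_W @ (P_S @ x)  # product of two orthogonal projections

x_in = W @ rng.standard_normal(m)    # inside the reconstruction space
x_any = rng.standard_normal(n)       # arbitrary signal
for x in (x_in, x_any):
    print(np.linalg.norm(x - consistent(x)), np.linalg.norm(x - minimax(x)))
# Consistent sampling is exact on the reconstruction space; minimax regret
# trades a small error there for more uniform behavior on arbitrary inputs.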
{ "cite_N": [ "@cite_17" ], "mid": [ "2147722152" ], "abstract": [ "We treat the problem of reconstructing a signal from its nonideal samples where the sampling and reconstruction spaces as well as the class of input signals can be arbitrary subspaces of a Hilbert space. Our formulation is general, and includes as special cases reconstruction from finitely many samples as well as uniform-sampling of continuous-time signals, which are not necessarily bandlimited. To obtain a good approximation of the signal in the reconstruction space from its samples, we suggest two design strategies that attempt to minimize the squared-norm error between the signal and its reconstruction. The approaches we propose differ in their assumptions on the input signal: If the signal is known to lie in an appropriately chosen subspace, then we propose a method that achieves the minimal squared error. On the other hand, when the signal is not restricted, we show that the minimal-norm reconstruction cannot generally be obtained. Instead, we suggest minimizing the worst-case squared error between the reconstructed signal, and the best possible (but usually unattainable) approximation of the signal within the reconstruction space. We demonstrate both theoretically and through simulations that the suggested methods can outperform the consistent reconstruction approach previously proposed for this problem." ] }
1812.07738
2904012376
We study the risk performance of distributed learning for the regularization empirical risk minimization with fast convergence rate, substantially improving the error analysis of the existing divide-and-conquer based distributed learning. An interesting theoretical finding is that the larger the diversity of each local estimate is, the tighter the risk bound is. This theoretical analysis motivates us to devise an effective maxdiversity distributed learning algorithm (MDD). Experimental results show that MDD can outperform the existing divide-andconquer methods but with a bit more time. Theoretical analysis and empirical results demonstrate that our proposed MDD is sound and effective.
In this subsection, we compare our bound with the related work @cite_9 @cite_6 @cite_15 . Under smoothness, strong convexity, and some further assumptions, a distributed risk bound is given in @cite_9 . Under an eigenfunction assumption, the error analysis for distributed regularized least squares was established in @cite_6 . By removing the eigenfunction assumption of @cite_6 with a novel integral operator method, a new bound was derived in @cite_15 . Note that if @math is @math -Lipschitz continuous over @math , then the order of @cite_6 @cite_15 @cite_9 in @math is at most @math . According to the preceding subsections, if @math is not very large and @math is small, the order obtained in this paper can be even faster than @math , which is much faster than those in the related work @cite_9 @cite_6 @cite_15 .
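For context, the following is a minimal sketch of the divide-and-conquer baseline these bounds analyze, in which each machine solves a local regularized least squares problem and the local estimates are averaged into a global predictor ( @cite_9 @cite_6 ). The data, regularizer, and shard count are illustrative assumptions.

```python
import numpy as np

def ridge(X, y, lam):
    """Closed-form regularized least squares estimator."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

rng = np.random.default_rng(1)
N, d, m, lam = 6000, 10, 6, 1.0   # samples, dims, machines, regularizer (illustrative)
w_true = rng.standard_normal(d)
X = rng.standard_normal((N, d))
y = X @ w_true + 0.1 * rng.standard_normal(N)

# Divide-and-conquer: solve the regularized ERM on each shard, then average.
shards = np.array_split(np.arange(N), m)
w_avg = np.mean([ridge(X[idx], y[idx], lam) for idx in shards], axis=0)

print("distributed error:", np.linalg.norm(w_avg - w_true))
print("centralized error:", np.linalg.norm(ridge(X, y, lam) - w_true))
```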
{ "cite_N": [ "@cite_9", "@cite_6", "@cite_15" ], "mid": [ "2571425027", "199271301", "2963020641" ], "abstract": [ "We study two communication-efficient algorithms for distributed statistical optimization on large-scale data. The first algorithm is an averaging method that distributes the N data samples evenly to m machines, performs separate minimization on each subset, and then averages the estimates. We provide a sharp analysis of this average mixture algorithm, showing that under a reasonable set of conditions, the combined parameter achieves mean-squared error that decays as O(N-1 + (N m)-2). Whenever m ≤ √N, this guarantee matches the best possible rate achievable by a centralized algorithm having access to all N samples. The second algorithm is a novel method, based on an appropriate form of the bootstrap. Requiring only a single round of communication, it has mean-squared error that decays as O(N-1 + (N m)-3), and so is more robust to the amount of parallelization. We complement our theoretical results with experiments on large-scale problems from the internet search domain. In particular, we show that our methods efficiently solve an advertisement prediction problem from the Chinese SoSo Search Engine, which consists of N ≈ 2.4 × 108 samples and d ≥ 700,000 dimensions.", "We study a decomposition-based scalable approach to performing kernel ridge regression. The method is simple to describe: it randomly partitions a dataset of size N into m subsets of equal size, computes an independent kernel ridge regression estimator for each subset, then averages the local solutions into a global predictor. This partitioning leads to a substantial reduction in computation time versus the standard approach of performing kernel ridge regression on all N samples. Our main theorem establishes that despite the computational speed-up, statistical optimality is retained: if m is not too large, the partition-based estimate achieves optimal rates of convergence for the full sample size N. As concrete examples, our theory guarantees that m may grow polynomially in N for Sobolev spaces, and nearly linearly for finite-rank kernels and Gaussian kernels. We conclude with simulations complementing our theoretical results and exhibiting the computational and statistical benefits of our approach.", "" ] }
1812.07627
2905076505
The standard loss function used to train neural network classifiers, categorical cross-entropy (CCE), seeks to maximize accuracy on the training data; building useful representations is not a necessary byproduct of this objective. In this work, we propose clustering-oriented representation learning (COREL) as an alternative to CCE in the context of a generalized attractive-repulsive loss framework. COREL has the consequence of building latent representations that collectively exhibit the quality of natural clustering within the latent space of the final hidden layer, according to a predefined similarity function. Despite being simple to implement, COREL variants outperform or perform equivalently to CCE in a variety of scenarios, including image and news article classification using both feed-forward and convolutional neural networks. Analysis of the latent spaces created with different similarity functions facilitates insights on the different use cases COREL variants can satisfy, where the Cosine-COREL variant makes a consistently clusterable latent space, while Gaussian-COREL consistently obtains better classification accuracy than CCE.
Recently there have been many approaches using either cosine- or Gaussian-based loss functions. Most of these target the domain of image classification, where the need for discriminative features is well recognized, particularly in facial recognition @cite_19 @cite_2 @cite_21 @cite_12 @cite_7 . Some recent work has combined cosine similarity with weight imprinting @cite_24 , setting @math for @math to be dynamically computed latent centroids (as in center loss @cite_1 ) and then applying the softmax operation over the cosine similarities, as in congenerous cosine loss @cite_29 . Other work models image classes as Gaussians @cite_16 , but requires learning the covariance matrix @math in the similarity function, which is constrained to be diagonal.
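To make this family of losses concrete, the following is a minimal numpy sketch of a softmax over scaled cosine similarities to class centroids, in the spirit of congenerous cosine loss @cite_29 . The embeddings, centroids, and scale parameter are illustrative assumptions, not the exact formulation of any cited work.

```python
import numpy as np

def cosine_softmax_loss(z, centroids, labels, scale=10.0):
    """Cross-entropy over scaled cosine similarities between
    embeddings z (n, d) and class centroids (k, d)."""
    zn = z / np.linalg.norm(z, axis=1, keepdims=True)
    cn = centroids / np.linalg.norm(centroids, axis=1, keepdims=True)
    logits = scale * zn @ cn.T                    # (n, k) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(labels)), labels].mean()

rng = np.random.default_rng(2)
z = rng.standard_normal((5, 16))          # hypothetical embeddings
centroids = rng.standard_normal((3, 16))  # hypothetical class centroids
labels = np.array([0, 1, 2, 0, 1])
print(cosine_softmax_loss(z, centroids, labels))
```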
{ "cite_N": [ "@cite_7", "@cite_29", "@cite_21", "@cite_1", "@cite_24", "@cite_19", "@cite_2", "@cite_16", "@cite_12" ], "mid": [ "2790592560", "2594088761", "2784294927", "2520774990", "2796346823", "", "2600537992", "", "2786817236" ], "abstract": [ "We motivate and present Ring loss, a simple and elegant feature normalization approach for deep networks designed to augment standard loss functions such as Softmax. We argue that deep feature normalization is an important aspect of supervised classification problems where we require the model to represent each class in a multi-class problem equally well. The direct approach to feature normalization through the hard normalization operation results in a non-convex formulation. Instead, Ring loss applies soft normalization, where it gradually learns to constrain the norm to the scaled unit circle while preserving convexity leading to more robust features. We apply Ring loss to large-scale face recognition problems and present results on LFW, the challenging protocols of IJB-A Janus, Janus CS3 (a superset of IJB-A Janus), Celebrity Frontal-Profile (CFP) and MegaFace with 1 million distractors. Ring loss outperforms strong baselines, matches state-of-the-art performance on IJB-A Janus and outperforms all other results on the challenging Janus CS3 thereby achieving state-of-the-art. We also outperform strong baselines in handling extremely low resolution face matching.", "Person recognition aims at recognizing the same identity across time and space with complicated scenes and similar appearance. In this paper, we propose a novel method to address this task by training a network to obtain robust and representative features. The intuition is that we directly compare and optimize the cosine distance between two features - enlarging inter-class distinction as well as alleviating inner-class variance. We propose a congenerous cosine loss by minimizing the cosine distance between samples and their cluster centroid in a cooperative way. Such a design reduces the complexity and could be implemented via softmax with normalized inputs. Our method also differs from previous work in person recognition that we do not conduct a second training on the test subset. The identity of a person is determined by measuring the similarity from several body regions in the reference set. Experimental results show that the proposed approach achieves better classification accuracy against previous state-of-the-arts.", "", "Convolutional neural networks (CNNs) have been widely used in computer vision community, significantly improving the state-of-the-art. In most of the available CNNs, the softmax loss function is used as the supervision signal to train the deep model. In order to enhance the discriminative power of the deeply learned features, this paper proposes a new supervision signal, called center loss, for face recognition task. Specifically, the center loss simultaneously learns a center for deep features of each class and penalizes the distances between the deep features and their corresponding class centers. More importantly, we prove that the proposed center loss function is trainable and easy to optimize in the CNNs. With the joint supervision of softmax loss and center loss, we can train a robust CNNs to obtain the deep features with the two key learning objectives, inter-class dispension and intra-class compactness as much as possible, which are very essential to face recognition. 
It is encouraging to see that our CNNs (with such joint supervision) achieve the state-of-the-art accuracy on several important face recognition benchmarks, Labeled Faces in the Wild (LFW), YouTube Faces (YTF), and MegaFace Challenge. Especially, our new approach achieves the best results on MegaFace (the largest public domain face benchmark) under the protocol of small training set (contains under 500000 images and under 20000 persons), significantly improving the previous results and setting new state-of-the-art for both face recognition and face verification tasks.", "Human vision is able to immediately recognize novel visual categories after seeing just one or a few training examples. We describe how to add a similar capability to ConvNet classifiers by directly setting the final layer weights from novel training examples during low-shot learning. We call this process weight imprinting as it directly sets weights for a new category based on an appropriately scaled copy of the embedding layer activations for that training example. The imprinting process provides a valuable complement to training with stochastic gradient descent, as it provides immediate good classification performance and an initialization for any further fine-tuning in the future. We show how this imprinting process is related to proxy-based embeddings. However, it differs in that only a single imprinted weight vector is learned for each novel category, rather than relying on a nearest-neighbor distance to training instances as typically used with embedding methods. Our experiments show that using averaging of imprinted weights provides better generalization than using nearest-neighbor instance embeddings.", "", "In recent years, the performance of face verification systems has significantly improved using deep convolutional neural networks (DCNNs). A typical pipeline for face verification includes training a deep network for subject classification with softmax loss, using the penultimate layer output as the feature descriptor, and generating a cosine similarity score given a pair of face images. The softmax loss function does not optimize the features to have higher similarity score for positive pairs and lower similarity score for negative pairs, which leads to a performance gap. In this paper, we add an L2-constraint to the feature descriptors which restricts them to lie on a hypersphere of a fixed radius. This module can be easily implemented using existing deep learning frameworks. We show that integrating this simple step in the training pipeline significantly boosts the performance of face verification. Specifically, we achieve state-of-the-art results on the challenging IJB-A dataset, achieving True Accept Rate of 0.909 at False Accept Rate 0.0001 on the face verification protocol. Additionally, we achieve state-of-the-art performance on LFW dataset with an accuracy of 99.78 , and competing performance on YTF dataset with accuracy of 96.08 .", "", "Face recognition has achieved revolutionary advancement owing to the advancement of the deep convolutional neural network (CNN). The central task of face recognition, including face verification and identification, involves face feature discrimination. However, traditional softmax loss of deep CNN usually lacks the power of discrimination. To address this problem, recently several loss functions such as central loss centerloss , large margin softmax loss lsoftmax , and angular softmax loss sphereface have been proposed. 
All these improvement algorithms share the same idea: maximizing inter-class variance and minimizing intra-class variance. In this paper, we design a novel loss function, namely large margin cosine loss (LMCL), to realize this idea from a different perspective. More specifically, we reformulate the softmax loss as cosine loss by L2 normalizing both features and weight vectors to remove radial variation, based on which a cosine margin term is introduced to further maximize decision margin in angular space. As a result, minimum intra-class variance and maximum inter-class variance are achieved by normalization and cosine decision margin maximization. We refer to our model trained with LMCL as CosFace. To test our approach, extensive experimental evaluations are conducted on the most popular public-domain face recognition datasets such as MegaFace Challenge, Youtube Faces (YTF) and Labeled Face in the Wild (LFW). We achieve the state-of-the-art performance on these benchmark experiments, which confirms the effectiveness of our approach." ] }
1812.07627
2905076505
The standard loss function used to train neural network classifiers, categorical cross-entropy (CCE), seeks to maximize accuracy on the training data; building useful representations is not a necessary byproduct of this objective. In this work, we propose clustering-oriented representation learning (COREL) as an alternative to CCE in the context of a generalized attractive-repulsive loss framework. COREL has the consequence of building latent representations that collectively exhibit the quality of natural clustering within the latent space of the final hidden layer, according to a predefined similarity function. Despite being simple to implement, COREL variants outperform or perform equivalently to CCE in a variety of scenarios, including image and news article classification using both feed-forward and convolutional neural networks. Analysis of the latent spaces created with different similarity functions facilitates insights on the different use cases COREL variants can satisfy, where the Cosine-COREL variant makes a consistently clusterable latent space, while Gaussian-COREL consistently obtains better classification accuracy than CCE.
In natural language processing, cosine similarity-based losses have only begun to be explored for the purpose of constructing more meaningful representations: in one case to linearly construct antonymous word embeddings @cite_26 , and in another for the deep transfer learning task of building clusterable event representations for event coreference resolution @cite_27 .
{ "cite_N": [ "@cite_27", "@cite_26" ], "mid": [ "2803395921", "2250539671" ], "abstract": [ "We present an approach to event coreference resolution by developing a general framework for clustering that uses supervised representation learning. We propose a neural network architecture with novel Clustering-Oriented Regularization (CORE) terms in the objective function. These terms encourage the model to create embeddings of event mentions that are amenable to clustering. We then use agglomerative clustering on these embeddings to build event coreference chains. For both within- and cross-document coreference on the ECB+ corpus, our model obtains better results than models that require significantly more pre-annotated information. This work provides insight and motivating results for a new general approach to solving coreference and clustering problems with representation learning.", "Recent methods for learning vector space representations of words have succeeded in capturing fine-grained semantic and syntactic regularities using vector arithmetic, but the origin of these regularities has remained opaque. We analyze and make explicit the model properties needed for such regularities to emerge in word vectors. The result is a new global logbilinear regression model that combines the advantages of the two major model families in the literature: global matrix factorization and local context window methods. Our model efficiently leverages statistical information by training only on the nonzero elements in a word-word cooccurrence matrix, rather than on the entire sparse matrix or on individual context windows in a large corpus. The model produces a vector space with meaningful substructure, as evidenced by its performance of 75 on a recent word analogy task. It also outperforms related models on similarity tasks and named entity recognition." ] }
1812.07627
2905076505
The standard loss function used to train neural network classifiers, categorical cross-entropy (CCE), seeks to maximize accuracy on the training data; building useful representations is not a necessary byproduct of this objective. In this work, we propose clustering-oriented representation learning (COREL) as an alternative to CCE in the context of a generalized attractive-repulsive loss framework. COREL has the consequence of building latent representations that collectively exhibit the quality of natural clustering within the latent space of the final hidden layer, according to a predefined similarity function. Despite being simple to implement, COREL variants outperform or perform equivalently to CCE in a variety of scenarios, including image and news article classification using both feed-forward and convolutional neural networks. Analysis of the latent spaces created with different similarity functions facilitates insights on the different use cases COREL variants can satisfy, where the Cosine-COREL variant makes a consistently clusterable latent space, while Gaussian-COREL consistently obtains better classification accuracy than CCE.
Recent work @cite_20 has modelled classes with a set of Gaussians, with the motivation of creating a well-structured latent space, using neighborhood-based sampling to maintain the centers of these Gaussians. However, this requires substantial architectural modifications to neural models and frequent pauses during training to run a K-Means clustering algorithm over the latent space, which becomes more costly as the training set size increases. Other work has designed similarly motivated loss functions, but also requires significantly more model engineering than COREL. This includes pairwise-based methods @cite_22 @cite_18 @cite_5 and triplet-based methods @cite_28 @cite_14 @cite_13 ; all of these require sophisticated methods for sampling training data, necessitating more hyperparameters and architectural modifications in order to implement their methods.
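For comparison with COREL's simplicity, here is a minimal sketch of the margin objective underlying the triplet-based methods above. The margin and embeddings are illustrative assumptions, and the sophisticated triplet-sampling schemes those methods rely on are deliberately omitted.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge on squared distances: pull positives in, push negatives out."""
    d_pos = np.sum((anchor - positive) ** 2, axis=1)
    d_neg = np.sum((anchor - negative) ** 2, axis=1)
    return np.maximum(0.0, d_pos - d_neg + margin).mean()

rng = np.random.default_rng(3)
a, p, n = (rng.standard_normal((4, 8)) for _ in range(3))  # toy embedding triplets
print(triplet_loss(a, p, n))
```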
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_22", "@cite_28", "@cite_5", "@cite_13", "@cite_20" ], "mid": [ "2792861137", "2096733369", "2205138079", "2113307832", "2769159728", "2792096654", "2270409809" ], "abstract": [ "Recently, there has been increasing interest to leverage the competence of neural networks to analyze data. In particular, new clustering methods that employ deep embeddings have been presented. In this paper, we depart from centroid-based models and suggest a new framework, called Clustering-driven deep embedding with PAirwise Constraints (CPAC), for non-parametric clustering using a neural network. We present a clustering-driven embedding based on a Siamese network that encourages pairs of data points to output similar representations in the latent space. Our pair-based model allows augmenting the information with labeled pairs to constitute a semi-supervised framework. Our approach is based on analyzing the losses associated with each pair to refine the set of constraints. We show that clustering performance increases when using this scheme, even with a limited amount of user queries. We demonstrate how our architecture is adapted for various types of data and present the first deep framework to cluster 3D shapes.", "Despite significant recent advances in the field of face recognition [10, 14, 15, 17], implementing face verification and recognition efficiently at scale presents serious challenges to current approaches. In this paper we present a system, called FaceNet, that directly learns a mapping from face images to a compact Euclidean space where distances directly correspond to a measure of face similarity. Once this space has been produced, tasks such as face recognition, verification and clustering can be easily implemented using standard techniques with FaceNet embeddings as feature vectors.", "This paper presents a neural network-based end-to-end clustering framework. We design a novel strategy to utilize the contrastive criteria for pushing data-forming clusters directly from raw data, in addition to learning a feature embedding suitable for such clustering. The network is trained with weak labels, specifically partial pairwise relationships between data instances. The cluster assignments and their probabilities are then obtained at the output layer by feed-forwarding the data. The framework has the interesting characteristic that no cluster centers need to be explicitly specified, thus the resulting cluster distribution is purely data-driven and no distance metrics need to be predefined. The experiments show that the proposed approach beats the conventional two-stage method (feature embedding with k-means) by a significant margin. It also compares favorably to the performance of the standard cross entropy loss for classification. Robustness analysis also shows that the method is largely insensitive to the number of clusters. Specifically, we show that the number of dominant clusters is close to the true number of clusters even when a large k is used for clustering.", "Motivated by large-scale multimedia applications we propose to learn mappings from high-dimensional data to binary codes that preserve semantic similarity. Binary codes are well suited to large-scale applications as they are storage efficient and permit exact sub-linear kNN search. The framework is applicable to broad families of mappings, and uses a flexible form of triplet ranking loss. 
We overcome discontinuous optimization of the discrete mappings by minimizing a piecewise-smooth upper bound on empirical loss, inspired by latent structural SVMs. We develop a new loss-augmented inference algorithm that is quadratic in the code length. We show strong retrieval performance on CIFAR-10 and MNIST, with promising classification results using no more than kNN on the binary codes.", "This paper introduces a novel method to perform transfer learning across domains and tasks, formulating it as a problem of learning to cluster. The key insight is that, in addition to features, we can transfer similarity information and this is sufficient to learn a similarity function and clustering network to perform both domain adaptation and cross-task transfer learning. We begin by reducing categorical information to pairwise constraints, which only considers whether two instances belong to the same class or not. This similarity is category-agnostic and can be learned from data in the source domain using a similarity network. We then present two novel approaches for performing transfer learning using this similarity function. First, for unsupervised domain adaptation, we design a new loss function to regularize classification with a constrained clustering loss, hence learning a clustering network with the transferred similarity metric generating the training inputs. Second, for cross-task learning (i.e., unsupervised clustering with unseen categories), we propose a framework to reconstruct and estimate the number of semantic clusters, again using the clustering network. Since the similarity network is noisy, the key is to use a robust clustering algorithm, and we show that our formulation is more robust than the alternative constrained and unconstrained clustering approaches. Using this method, we first show state of the art results for the challenging cross-task problem, applied on Omniglot and ImageNet. Our results show that we can reconstruct semantic clusters with high accuracy. We then evaluate the performance of cross-domain transfer using images from the Office-31 and SVHN-MNIST tasks and present top accuracy on both datasets. Our approach doesn't explicitly deal with domain discrepancy. If we combine with a domain adaptation loss, it shows further improvement.", "Most existing 3D object recognition algorithms focus on leveraging the strong discriminative power of deep learning models with softmax loss for the classification of 3D data, while learning discriminative features with deep metric learning for 3D object retrieval is more or less neglected. In the paper, we study variants of deep metric learning losses for 3D object retrieval, which did not receive enough attention from this area. First , two kinds of representative losses, triplet loss and center loss, are introduced which could learn more discriminative features than traditional classification loss. Then, we propose a novel loss named triplet-center loss, which can further enhance the discriminative power of the features. The proposed triplet-center loss learns a center for each class and requires that the distances between samples and centers from the same class are closer than those from different classes. 
Extensive experimental results on two popular 3D object retrieval benchmarks and two widely-adopted sketch-based 3D shape retrieval benchmarks consistently demonstrate the effectiveness of our proposed loss, and significant improvements have been achieved compared with the state-of-the-arts.", "Distance metric learning (DML) approaches learn a transformation to a representation space where distance is in correspondence with a predefined notion of similarity. While such models offer a number of compelling benefits, it has been difficult for these to compete with modern classification algorithms in performance and even in feature extraction. In this work, we propose a novel approach explicitly designed to address a number of subtle yet important issues which have stymied earlier DML algorithms. It maintains an explicit model of the distributions of the different classes in representation space. It then employs this knowledge to adaptively assess similarity, and achieve local discrimination by penalizing class distribution overlap. We demonstrate the effectiveness of this idea on several tasks. Our approach achieves state-of-the-art classification results on a number of fine-grained visual recognition datasets, surpassing the standard softmax classifier and outperforming triplet loss by a relative margin of 30-40 . In terms of computational performance, it alleviates training inefficiencies in the traditional triplet loss, reaching the same error in 5-30 times fewer iterations. Beyond classification, we further validate the saliency of the learnt representations via their attribute concentration and hierarchy recovery properties, achieving 10-25 relative gains on the softmax classifier and 25-50 on triplet loss in these tasks." ] }
1812.07534
2903863706
In this article, we investigate the impact of information on networked control systems, and illustrate how to quantify a fundamental property of stochastic processes that can enrich our understanding about such systems. To that end, we develop a theoretical framework for the joint design of an event trigger and a controller in optimal event-triggered control. We cover two distinct information patterns: perfect information and imperfect information. In both cases, observations are available at the event trigger instantly, but are transmitted to the controller sporadically with one-step delay. For each information pattern, we characterize the optimal triggering policy and optimal control policy such that the corresponding policy profile represents a Nash equilibrium. Accordingly, we quantify the value of information @math as the variation in the cost-to-go of the system given an observation at time @math . Finally, we provide an algorithm for approximation of the value of information, and synthesize a closed-form suboptimal triggering policy with a performance guarantee that can readily be implemented.
A special class of event-triggered estimation and event-triggered control is sensor scheduling, in which open-loop triggering policies are employed. Sensor scheduling can be traced back to the 1970s; recently, however, Trimpe and D'Andrea @cite_25 and Leong @cite_8 adopted sensor scheduling for networked control systems and obtained open-loop triggering policies in terms of the estimation error covariances. It is also worth mentioning that, in a rather different setup from the one considered in this study, Antunes and Heemels @cite_27 considered a networked control system in which the event trigger and the controller are both collocated with the sensor, and control inputs are transmitted to the process. They proposed an approximation algorithm and showed that a performance improvement with respect to periodic control can be guaranteed. Our approximation algorithm is inspired by this idea. Nevertheless, unlike the above work, the event trigger and the controller here are distributed.
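To illustrate the covariance-based scheduling of @cite_25 @cite_8 , the following is a minimal sketch of a scalar estimator whose error variance propagates open-loop and triggers a transmission once it exceeds a threshold. The system parameters, the threshold, and the reset-to-zero idealization are assumptions for illustration only.

```python
import numpy as np

# Scalar process x_{k+1} = a x_k + w_k, w_k ~ N(0, q) (hypothetical values).
a, q, threshold = 0.95, 0.1, 0.5

P = 0.0        # estimation error variance at the remote estimator
events = []
for k in range(30):
    P = a * a * P + q      # open-loop variance propagation between transmissions
    if P > threshold:      # variance-based trigger: transmit and reset
        events.append(k)
        P = 0.0            # exact state received (idealized)
print("transmission instants:", events)
```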
{ "cite_N": [ "@cite_27", "@cite_25", "@cite_8" ], "mid": [ "1968939752", "2011753154", "2791885538" ], "abstract": [ "Cyber-Physical Systems (CPSs) resulting from the interconnection of computational, communication, and control (cyber) devices with physical processes are wide spreading in our society. In several CPS applications it is crucial to minimize the communication burden, while still providing desirable closed-loop control properties. To this effect, a promising approach is to embrace the recently proposed event-triggered control paradigm, in which the transmission times are chosen based on well-defined events, using state information. However, few general event-triggered control methods guarantee closed-loop improvements over traditional periodic transmission strategies. Here, we provide a new class of event-triggered controllers for linear systems which guarantee better quadratic performance than traditional periodic time-triggered control using the same average transmission rate. In particular, our main results explicitly quantify the obtained performance improvements for quadratic average cost problems. The proposed controllers are inspired by rollout ideas in the context of dynamic programming.", "An event-based state estimation scenario is considered where a sensor sporadically transmits observations of a scalar linear process to a remote estimator. The remote estimator is a time-varying Kalman filter. The triggering decision is based on the estimation variance: the sensor runs a copy of the remote estimator and transmits a measurement if the associated measurement prediction variance exceeds a tolerable threshold. The resulting variance iteration is a new type of Riccati equation with switching that corresponds to the availability or unavailability of a measurement and depends on the variance at the previous step. We study asymptotic properties of the variance iteration and, in particular, asymptotic convergence to a periodic solution.", "This paper studies a remote state estimation problem where a sensor, equipped with energy harvesting capabilities, observes a dynamical process and transmits local state estimates over a packet dropping channel to a remote estimator. The objective is to decide, at every discrete time instant, whether the sensor should transmit or not, in order to minimize the expected estimation error covariance at the remote estimator over a finite horizon, subject to constraints on the sensor’s battery energy governed by an energy harvesting process. We establish structural results on the optimal scheduling which show that, for a given battery energy level and a given harvested energy, the optimal policy is a threshold policy on the error covariance. Similarly, for a given error covariance and a given harvested energy, the optimal policy is a threshold policy on the current battery level. An extension to the problem of transmission scheduling and control with an energy harvesting sensor is also considered." ] }
1812.07712
2905500782
Unsupervised video object segmentation is a crucial application in video analysis without knowing any prior information about the objects. It becomes tremendously challenging when multiple objects occur and interact in a given video clip. In this paper, a novel unsupervised video object segmentation approach via distractor-aware online adaptation (DOA) is proposed. DOA models spatial-temporal consistency in video sequences by capturing background dependencies from adjacent frames. Instance proposals are generated by the instance segmentation network for each frame and then selected by motion information as hard negatives if they exist and positives. To adopt high-quality hard negatives, the block matching algorithm is then applied to preceding frames to track the associated hard negatives. General negatives are also introduced in case that there are no hard negatives in the sequence and experiments demonstrate both kinds of negatives (distractors) are complementary. Finally, we conduct DOA using the positive, negative, and hard negative masks to update the foreground background segmentation. The proposed approach achieves state-of-the-art results on two benchmark datasets, DAVIS 2016 and FBMS-59 datasets.
Given the manual foreground/background annotations for the first frame of a video clip, semi-supervised VOS methods segment the foreground object across the remaining frames. Deep learning based methods have achieved excellent performance @cite_9 @cite_60 @cite_58 @cite_14 @cite_30 @cite_4 , and static image segmentation @cite_27 @cite_49 @cite_50 @cite_18 @cite_23 has been utilized to perform video object segmentation without any temporal information. MaskTrack @cite_49 uses the output of the previous frame as guidance to refine the mask in the next frame. OSVOS processes each frame independently by finetuning on the first frame, and OSVOS-S @cite_50 further transfers instance-level semantic information learned on ImageNet @cite_17 to produce more accurate results. OnAVOS @cite_9 proposes online finetuning with the predicted frames to further optimize the inference network. To fully exploit motion cues, MoNet @cite_30 introduces a distance transform layer to separate motion-inconstant objects and refine the segmentation results. However, when the object is occluded or its motion changes abruptly, performance deteriorates significantly. Our approach aims to tackle this challenge using distractor-aware online adaptation.
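As a rough sketch of the mask-guidance idea in MaskTrack @cite_49 , the previous frame's prediction can be stacked as an extra input channel for the segmentation network. The segment function below is a hypothetical stand-in, not the actual network of the cited work.

```python
import numpy as np

def segment(frame_with_guidance):
    """Hypothetical stand-in for a segmentation CNN: returns a mask in [0, 1]."""
    return 1.0 / (1.0 + np.exp(-frame_with_guidance.mean(axis=-1)))

def propagate_masks(frames, first_mask):
    """Refine each frame's mask using the previous prediction as guidance."""
    mask, masks = first_mask, []
    for frame in frames:
        guided = np.concatenate([frame, mask[..., None]], axis=-1)  # RGB + mask channel
        mask = segment(guided)
        masks.append(mask)
    return masks

rng = np.random.default_rng(4)
frames = rng.random((5, 32, 32, 3))                     # toy video clip
first_mask = (rng.random((32, 32)) > 0.5).astype(float)  # toy first-frame annotation
print(len(propagate_masks(frames, first_mask)), "masks produced")
```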
{ "cite_N": [ "@cite_30", "@cite_18", "@cite_14", "@cite_4", "@cite_60", "@cite_9", "@cite_27", "@cite_49", "@cite_50", "@cite_23", "@cite_58", "@cite_17" ], "mid": [ "2798441772", "2802962644", "2747668150", "", "", "2724418412", "1954128991", "2564998703", "2754124089", "2736477269", "2750515003", "2108598243" ], "abstract": [ "In this paper, we propose a novel MoNet model to deeply exploit motion cues for boosting video object segmentation performance from two aspects, i.e., frame representation learning and segmentation refinement. Concretely, MoNet exploits computed motion cue (i.e., optical flow) to reinforce the representation of the target frame by aligning and integrating representations from its neighbors. The new representation provides valuable temporal contexts for segmentation and improves robustness to various common contaminating factors, e.g., motion blur, appearance variation and deformation of video objects. Moreover, MoNet exploits motion inconsistency and transforms such motion cue into foreground background prior to eliminate distraction from confusing instances and noisy regions. By introducing a distance transform layer, MoNet can effectively separate motion-inconstant instances regions and thoroughly refine segmentation results. Integrating the proposed two motion exploitation components with a standard segmentation network, MoNet provides new state-of-the-art performance on three competitive benchmark datasets.", "With the development of Fully Convolutional Neural Network (FCN), there have been progressive advances in the field of semantic segmentation in recent years. The FCN-based solutions are able to summarize features across training images and generate matching templates for the desired object classes, yet they overlook intra-class difference (ICD) among multiple instances in the same class. In this work, we present a novel fine-to-coarse learning (FCL) procedure, which first guides the network with designed 'finer' sub-class labels, whose decisions are mapped to the original 'coarse' object category through end-to-end learning. A sub-class labeling strategy is designed with unsupervised clustering upon deep convolutional features, and the proposed FCL procedure enables a balance between the fine-scale (i.e. sub-class) and the coarse-scale (i.e. class) knowledge. We conduct extensive experiments on several popular datasets, including PASCAL VOC, Context, Person-Part and NYUDepth-v2 to demonstrate the advantage of learning finer sub-classes and the potential to guide the learning of deep networks with unsupervised clustering.", "We propose a novel video object segmentation algorithm based on pixel-level matching using Convolutional Neural Networks (CNN). Our network aims to distinguish the target area from the background on the basis of the pixel-level similarity between two object units. The proposed network represents a target object using features from different depth layers in order to take advantage of both the spatial details and the category-level semantic information. Furthermore, we propose a feature compression technique that drastically reduces the memory requirements while maintaining the capability of feature representation. Two-stage training (pre-training and fine-tuning) allows our network to handle any target object regardless of its category (even if the object's type does not belong to the pre-training data) or of variations in its appearance through a video sequence. 
Experiments on large datasets demonstrate the effectiveness of our model - against related methods - in terms of accuracy, speed, and stability. Finally, we introduce the transferability of our network to different domains, such as the infrared data domain.", "", "", "We tackle the task of semi-supervised video object segmentation, i.e. segmenting the pixels belonging to an object in the video using the ground truth pixel mask for the first frame. We build on the recently introduced one-shot video object segmentation (OSVOS) approach which uses a pretrained network and fine-tunes it on the first frame. While achieving impressive performance, at test time OSVOS uses the fine-tuned network in unchanged form and is not able to adapt to large changes in object appearance. To overcome this limitation, we propose Online Adaptive Video Object Segmentation (OnAVOS) which updates the network online using training examples selected based on the confidence of the network and the spatial configuration. Additionally, we add a pretraining step based on objectness, which is learned on PASCAL. Our experiments show that both extensions are highly effective and improve the state of the art on DAVIS to an intersection-over-union score of 85.7 .", "We introduce an unsupervised, geodesic distance based, salient video object segmentation method. Unlike traditional methods, our method incorporates saliency as prior for object via the computation of robust geodesic measurement. We consider two discriminative visual features: spatial edges and temporal motion boundaries as indicators of foreground object locations. We first generate framewise spatiotemporal saliency maps using geodesic distance from these indicators. Building on the observation that foreground areas are surrounded by the regions with high spatiotemporal edge values, geodesic distance provides an initial estimation for foreground and background. Then, high-quality saliency results are produced via the geodesic distances to background regions in the subsequent frames. Through the resulting saliency maps, we build global appearance models for foreground and background. By imposing motion continuity, we establish a dynamic location model for each frame. Finally, the spatiotemporal saliency maps, appearance models and dynamic location models are combined into an energy minimization framework to attain both spatially and temporally coherent object segmentation. Extensive quantitative and qualitative experiments on benchmark video dataset demonstrate the superiority of the proposed method over the state-of-the-art algorithms.", "Inspired by recent advances of deep learning in instance segmentation and object tracking, we introduce the concept of convnet-based guidance applied to video object segmentation. Our model proceeds on a per-frame basis, guided by the output of the previous frame towards the object of interest in the next frame. We demonstrate that highly accurate object segmentation in videos can be enabled by using a convolutional neural network (convnet) trained with static images only. The key component of our approach is a combination of offline and online learning strategies, where the former produces a refined mask from the previous frame estimate and the latter allows to capture the appearance of the specific object instance. Our method can handle different types of input annotations such as bounding boxes and segments while leveraging an arbitrary amount of annotated frames. 
Therefore our system is suitable for diverse applications with different requirements in terms of accuracy and efficiency. In our extensive evaluation, we obtain competitive results on three different datasets, independently from the type of input annotation.", "Video Object Segmentation, and video processing in general, has been historically dominated by methods that rely on the temporal consistency and redundancy in consecutive video frames. When the temporal smoothness is suddenly broken, such as when an object is occluded, or some frames are missing in a sequence, the result of these methods can deteriorate significantly or they may not even produce any result at all. This paper explores the orthogonal approach of processing each frame independently, i.e disregarding the temporal information. In particular, it tackles the task of semi-supervised video object segmentation: the separation of an object from the background in a video, given its mask in the first frame. We present Semantic One-Shot Video Object Segmentation (OSVOS-S), based on a fully-convolutional neural network architecture that is able to successively transfer generic semantic information, learned on ImageNet, to the task of foreground segmentation, and finally to learning the appearance of a single annotated object of the test sequence (hence one shot). We show that instance level semantic information, when combined effectively, can dramatically improve the results of our previous method, OSVOS. We perform experiments on two recent video segmentation databases, which show that OSVOS-S is both the fastest and most accurate method in the state of the art.", "Recent development in fully convolutional neural network enables efficient end-to-end learning of semantic segmentation. Traditionally, the convolutional classifiers are taught to learn the representative semantic features of labeled semantic objects. In this work, we propose a reverse attention network (RAN) architecture that trains the network to capture the opposite concept (i.e., what are not associated with a target class) as well. The RAN is a three-branch network that performs the direct, reverse and reverse-attention learning processes simultaneously. Extensive experiments are conducted to show the effectiveness of the RAN in semantic segmentation. Being built upon the DeepLabv2-LargeFOV, the RAN achieves the state-of-the-art mIoU score (48.1 ) for the challenging PASCAL-Context dataset. Significant performance improvements are also observed for the PASCAL-VOC, Person-Part, NYUDv2 and ADE20K datasets.", "A semi-supervised online video object segmentation algorithm, which accepts user annotations about a target object at the first frame, is proposed in this work. We propagate the segmentation labels at the previous frame to the current frame using optical flow vectors. However, the propagation is error-prone. Therefore, we develop the convolutional trident network (CTN), which has three decoding branches: separative, definite foreground, and definite background decoders. Then, we perform Markov random field optimization based on outputs of the three decoders. We sequentially carry out these processes from the second to the last frames to extract a segment track of the target object. 
Experimental results demonstrate that the proposed algorithm significantly outperforms the state-of-the-art conventional algorithms on the DAVIS benchmark dataset.", "The explosion of image data on the Internet has the potential to foster more sophisticated and robust models and algorithms to index, retrieve, organize and interact with images and multimedia data. But exactly how such data can be harnessed and organized remains a critical problem. We introduce here a new database called “ImageNet”, a large-scale ontology of images built upon the backbone of the WordNet structure. ImageNet aims to populate the majority of the 80,000 synsets of WordNet with an average of 500-1000 clean and full resolution images. This will result in tens of millions of annotated images organized by the semantic hierarchy of WordNet. This paper offers a detailed analysis of ImageNet in its current state: 12 subtrees with 5247 synsets and 3.2 million images in total. We show that ImageNet is much larger in scale and diversity and much more accurate than the current image datasets. Constructing such a large-scale database is a challenging task. We describe the data collection scheme with Amazon Mechanical Turk. Lastly, we illustrate the usefulness of ImageNet through three simple applications in object recognition, image classification and automatic object clustering. We hope that the scale, accuracy, diversity and hierarchical structure of ImageNet can offer unparalleled opportunities to researchers in the computer vision community and beyond." ] }
1812.07712
2905500782
Unsupervised video object segmentation is a crucial application in video analysis without knowing any prior information about the objects. It becomes tremendously challenging when multiple objects occur and interact in a given video clip. In this paper, a novel unsupervised video object segmentation approach via distractor-aware online adaptation (DOA) is proposed. DOA models spatial-temporal consistency in video sequences by capturing background dependencies from adjacent frames. Instance proposals are generated by the instance segmentation network for each frame and then selected by motion information as hard negatives if they exist and positives. To adopt high-quality hard negatives, the block matching algorithm is then applied to preceding frames to track the associated hard negatives. General negatives are also introduced in case that there are no hard negatives in the sequence and experiments demonstrate both kinds of negatives (distractors) are complementary. Finally, we conduct DOA using the positive, negative, and hard negative masks to update the foreground background segmentation. The proposed approach achieves state-of-the-art results on two benchmark datasets, DAVIS 2016 and FBMS-59 datasets.
Unsupervised VOS algorithms @cite_19 @cite_40 @cite_57 @cite_34 @cite_8 @cite_32 @cite_62 @cite_39 @cite_37 attempt to segment the primary object without any manual annotations. Several unsupervised VOS algorithms @cite_31 @cite_21 cluster the boundary pixels hierarchically to generate mid-level video segmentations. ARP @cite_28 exploits the recurrence of the primary object to initialize the segmentation, and then refines the initial mask by iteratively augmenting it with missing parts or excluding noisy ones. Recently, deep learning based methods @cite_38 @cite_1 @cite_54 @cite_32 @cite_62 have been proposed that utilize both motion boundaries and saliency maps to identify the primary object. Two-stream fully convolutional networks @cite_46 , such as LVO @cite_24 and FSEG @cite_1 , jointly exploit appearance and motion features. FSEG further boosts performance by utilizing weakly annotated videos, while LVO forwards the concatenated features to a bidirectional convolutional GRU. MBN @cite_62 combines background estimates from a motion-based bilateral network with instance embeddings to boost performance.
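As a caricature of the two-stream designs above, the following minimal sketch fuses per-pixel appearance and motion scores with a fixed weight. The features and the late-fusion rule are illustrative assumptions, not the architectures of @cite_46 @cite_24 @cite_1 .

```python
import numpy as np

def fuse_streams(appearance_logits, motion_logits, alpha=0.5):
    """Late fusion of per-pixel foreground scores from the two streams."""
    fused = alpha * appearance_logits + (1 - alpha) * motion_logits
    return fused > 0.0                 # binary foreground mask

rng = np.random.default_rng(5)
app = rng.standard_normal((32, 32))    # appearance-stream logits (toy)
mot = rng.standard_normal((32, 32))    # motion-stream logits (toy)
mask = fuse_streams(app, mot)
print("foreground pixels:", int(mask.sum()))
```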
{ "cite_N": [ "@cite_38", "@cite_37", "@cite_62", "@cite_8", "@cite_28", "@cite_46", "@cite_54", "@cite_21", "@cite_1", "@cite_32", "@cite_39", "@cite_57", "@cite_19", "@cite_40", "@cite_24", "@cite_31", "@cite_34" ], "mid": [ "2952390197", "2952663681", "2894890793", "2155598147", "2737008123", "2952632681", "2895340898", "", "2582761847", "2781888261", "", "2113708607", "2462481369", "2156252543", "2950083900", "159595522", "" ], "abstract": [ "The problem of determining whether an object is in motion, irrespective of camera motion, is far from being solved. We address this challenging task by learning motion patterns in videos. The core of our approach is a fully convolutional network, which is learned entirely from synthetic video sequences, and their ground-truth optical flow and motion segmentation. This encoder-decoder style architecture first learns a coarse representation of the optical flow field features, and then refines it iteratively to produce motion labels at the original high-resolution. We further improve this labeling with an objectness map and a conditional random field, to account for errors in optical flow, and also to focus on moving \"things\" rather than \"stuff\". The output label of each pixel denotes whether it has undergone independent motion, i.e., irrespective of camera motion. We demonstrate the benefits of this learning framework on the moving object segmentation task, where the goal is to segment all objects in motion. Our approach outperforms the top method on the recently released DAVIS benchmark dataset, comprising real-world sequences, by 5.6 . We also evaluate on the Berkeley motion segmentation database, achieving state-of-the-art results.", "One major technique debt in video object segmentation is to label the object masks for training instances. As a result, we propose to prepare inexpensive, yet high quality pseudo ground truth corrected with motion cue for video object segmentation training. Our method conducts semantic segmentation using instance segmentation networks and, then, selects the segmented object of interest as the pseudo ground truth based on the motion information. Afterwards, the pseudo ground truth is exploited to finetune the pretrained objectness network to facilitate object segmentation in the remaining frames of the video. We show that the pseudo ground truth could effectively improve the segmentation performance. This straightforward unsupervised video object segmentation method is more efficient than existing methods. Experimental results on DAVIS and FBMS show that the proposed method outperforms state-of-the-art unsupervised segmentation methods on various benchmark datasets. And the category-agnostic pseudo ground truth has great potential to extend to multiple arbitrary object tracking.", "In this work, we study the unsupervised video object segmentation problem where moving objects are segmented without prior knowledge of these objects. First, we propose a motion-based bilateral network to estimate the background based on the motion pattern of non-object regions. The bilateral network reduces false positive regions by accurately identifying background objects. Then, we integrate the background estimate from the bilateral network with instance embeddings into a graph, which allows multiple frame reasoning with graph edges linking pixels from different frames. We classify graph nodes by defining and minimizing a cost function, and segment the video frames based on the node labels. 
The proposed method outperforms previous state-of-the-art unsupervised video object segmentation methods against the DAVIS 2016 and the FBMS-59 datasets.", "In this paper, we propose a novel approach to extract primary object segments in videos in the object proposal' domain. The extracted primary object regions are then used to build object models for optimized video segmentation. The proposed approach has several contributions: First, a novel layered Directed Acyclic Graph (DAG) based framework is presented for detection and segmentation of the primary object in video. We exploit the fact that, in general, objects are spatially cohesive and characterized by locally smooth motion trajectories, to extract the primary object from the set of all available proposals based on motion, appearance and predicted-shape similarity across frames. Second, the DAG is initialized with an enhanced object proposal set where motion based proposal predictions (from adjacent frames) are used to expand the set of object proposals for a particular frame. Last, the paper presents a motion scoring function for selection of object proposals that emphasizes high optical flow gradients at proposal boundaries to discriminate between moving objects and the background. The proposed approach is evaluated using several challenging benchmark videos and it outperforms both unsupervised and supervised state-of-the-art methods.", "A novel algorithm to segment a primary object in a video sequence is proposed in this work. First, we generate candidate regions for the primary object using both color and motion edges. Second, we estimate initial primary object regions, by exploiting the recurrence property of the primary object. Third, we augment the initial regions with missing parts or reducing them by excluding noisy parts repeatedly. This augmentation and reduction process (ARP) identifies the primary object region in each frame. Experimental results demonstrate that the proposed algorithm significantly outperforms the state-of-the-art conventional algorithms on recent benchmark datasets.", "Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build \"fully convolutional\" networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet, the VGG net, and GoogLeNet) into fully convolutional networks and transfer their learned representations by fine-tuning to the segmentation task. We then define a novel architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20 relative improvement to 62.2 mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes one third of a second for a typical image.", "This paper proposes a fast video salient object detection model, based on a novel recurrent network architecture, named Pyramid Dilated Bidirectional ConvLSTM (PDB-ConvLSTM). 
A Pyramid Dilated Convolution (PDC) module is first designed for simultaneously extracting spatial features at multiple scales. These spatial features are then concatenated and fed into an extended Deeper Bidirectional ConvLSTM (DB-ConvLSTM) to learn spatiotemporal information. Forward and backward ConvLSTM units are placed in two layers and connected in a cascaded way, encouraging information flow between the bi-directional streams and leading to deeper feature extraction. We further augment DB-ConvLSTM with a PDC-like structure, by adopting several dilated DB-ConvLSTMs to extract multi-scale spatiotemporal information. Extensive experimental results show that our method outperforms previous video saliency models in a large margin, with a real-time speed of 20 fps on a single GPU. With unsupervised video object segmentation as an example application, the proposed model (with a CRF-based post-process) achieves state-of-the-art results on two popular benchmarks, well demonstrating its superior performance and high applicability.", "", "We propose an end-to-end learning framework for segmenting generic objects in videos. Our method learns to combine appearance and motion information to produce pixel level segmentation masks for all prominent objects in videos. We formulate this task as a structured prediction problem and design a two-stream fully convolutional neural network which fuses together motion and appearance in a unified framework. Since large-scale video datasets with pixel level segmentations are problematic, we show how to bootstrap weakly annotated videos together with existing image recognition datasets for training. Through experiments on three challenging video segmentation benchmarks, our method substantially improves the state-of-the-art for segmenting generic (unseen) objects. Code and pre-trained models are available on the project website.", "We propose a method for unsupervised video object segmentation by transferring the knowledge encapsulated in image-based instance embedding networks. The instance embedding network produces an embedding vector for each pixel that enables identifying all pixels belonging to the same object. Though trained on static images, the instance embeddings are stable over consecutive video frames, which allows us to link objects together over time. Thus, we adapt the instance networks trained on static images to video object segmentation and incorporate the embeddings with objectness and optical flow features, without model retraining or online fine-tuning. The proposed method outperforms state-of-the-art unsupervised segmentation methods in the DAVIS dataset and the FBMS dataset.", "", "We present a technique for separating foreground objects from the background in a video. Our method is fast, fully automatic, and makes minimal assumptions about the video. This enables handling essentially unconstrained settings, including rapidly moving background, arbitrary object motion and appearance, and non-rigid deformations and articulations. In experiments on two datasets containing over 1400 video shots, our method outperforms a state-of-the-art background subtraction technique [4] as well as methods based on clustering point tracks [6, 18, 19]. Moreover, it performs comparably to recent video object segmentation methods based on object proposals [14, 16, 27], while being orders of magnitude faster.", "An unsupervised video object segmentation algorithm, which discovers a primary object in a video sequence automatically, is proposed in this work. 
We introduce three energies in terms of foreground and background probability distributions: Markov, spatiotemporal, and antagonistic energies. Then, we minimize a hybrid of the three energies to separate a primary object from its background. However, the hybrid energy is nonconvex. Therefore, we develop the alternate convex optimization (ACO) scheme, which decomposes the nonconvex optimization into two quadratic programs. Moreover, we propose the forward-backward strategy, which performs the segmentation sequentially from the first to the last frames and then vice versa, to exploit temporal correlations. Experimental results on extensive datasets demonstrate that the proposed ACO algorithm outperforms the state-of-the-art techniques significantly.", "In this paper, we address the problem of video object segmentation, which is to automatically identify the primary object and segment the object out in every frame. We propose a novel formulation of selecting object region candidates simultaneously in all frames as finding a maximum weight clique in a weighted region graph. The selected regions are expected to have high objectness score (unary potential) as well as share similar appearance (binary potential). Since both unary and binary potentials are unreliable, we introduce two types of mutex (mutual exclusion) constraints on regions in the same clique: intra-frame and inter-frame constraints. Both types of constraints are expressed in a single quadratic form. We propose a novel algorithm to compute the maximal weight cliques that satisfy the constraints. We apply our method to challenging benchmark videos and obtain very competitive results that outperform state-of-the-art methods.", "This paper addresses the task of segmenting moving objects in unconstrained videos. We introduce a novel two-stream neural network with an explicit memory module to achieve this. The two streams of the network encode spatial and temporal features in a video sequence respectively, while the memory module captures the evolution of objects over time. The module to build a \"visual memory\" in video, i.e., a joint representation of all the video frames, is realized with a convolutional recurrent unit learned from a small number of training video sequences. Given a video frame as input, our approach assigns each pixel an object or background label based on the learned spatio-temporal features as well as the \"visual memory\" specific to the video, acquired automatically without any manually-annotated frames. The visual memory is implemented with convolutional gated recurrent units, which allows spatial information to be propagated over time. We evaluate our method extensively on two benchmarks, DAVIS and Freiburg-Berkeley motion segmentation datasets, and show state-of-the-art results. For example, our approach outperforms the top method on the DAVIS dataset by nearly 6%. We also provide an extensive ablative analysis to investigate the influence of each component in the proposed framework.", "The use of video segmentation as an early processing step in video analysis lags behind the use of image segmentation for image analysis, despite many available video segmentation methods. A major reason for this lag is simply that videos are an order of magnitude bigger than images; yet most methods require all voxels in the video to be loaded into memory, which is clearly prohibitive for even medium length videos. 
We address this limitation by proposing an approximation framework for streaming hierarchical video segmentation motivated by data stream algorithms: each video frame is processed only once and does not change the segmentation of previous frames. We implement the graph-based hierarchical segmentation method within our streaming framework; our method is the first streaming hierarchical video segmentation method proposed. We perform thorough experimental analysis on a benchmark video data set and longer videos. Our results indicate the graph-based streaming hierarchical method outperforms other streaming video segmentation methods and performs nearly as well as the full-video hierarchical graph-based method.", "" ] }
1812.07712
2905500782
Unsupervised video object segmentation is a crucial task in video analysis when no prior information about the objects is available. It becomes tremendously challenging when multiple objects occur and interact in a given video clip. In this paper, a novel unsupervised video object segmentation approach via distractor-aware online adaptation (DOA) is proposed. DOA models spatial-temporal consistency in video sequences by capturing background dependencies from adjacent frames. Instance proposals are generated by an instance segmentation network for each frame and are then classified using motion information as positives or, when present, hard negatives. To obtain high-quality hard negatives, a block matching algorithm is then applied to preceding frames to track the associated hard negatives. General negatives are also introduced for sequences that contain no hard negatives, and experiments demonstrate that the two kinds of negatives (distractors) are complementary. Finally, we conduct DOA using the positive, negative, and hard negative masks to update the foreground/background segmentation. The proposed approach achieves state-of-the-art results on two benchmark datasets, DAVIS 2016 and FBMS-59.
Hard negative mining has also been exploited in deep learning models to improve performance. OHEM @cite_45 trains region-based object detectors using automatically selected hard examples, yielding significant boosts in detection performance on both the PASCAL @cite_6 and MS COCO @cite_29 datasets. Focal loss @cite_52 is designed to down-weight the loss assigned to well-classified examples and to focus training on hard examples. Effective bootstrapping of hard examples has also been applied in face detection @cite_36 , pedestrian detection @cite_0 , and tracking @cite_7 . In @cite_20 , trackers and static-image object detectors are combined to select hard examples by finding inconsistencies between tracklets and object detections in unlabeled videos. In @cite_41 , a trained detector is utilized to find isolated detections, i.e., those without associated preceding and following detections, which are marked as hard negatives. In the proposed approach, we focus on developing an online hard example selection strategy for video object segmentation.
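The focal loss mentioned above is compact enough to state concretely. The following is a minimal NumPy sketch of the binary focal loss FL(p_t) = -alpha_t (1 - p_t)^gamma log(p_t) described in @cite_52 ; the function name, the NumPy formulation, and the example values below are our own illustrative choices (gamma = 2 and alpha = 0.25 are commonly reported to work well).

```python
import numpy as np

def binary_focal_loss(p, y, alpha=0.25, gamma=2.0, eps=1e-12):
    """Binary focal loss: FL(p_t) = -alpha_t * (1 - p_t)**gamma * log(p_t).

    p : predicted probabilities of the positive class, shape (N,)
    y : ground-truth labels in {0, 1}, shape (N,)
    """
    p = np.clip(p, eps, 1.0 - eps)            # avoid log(0)
    p_t = np.where(y == 1, p, 1.0 - p)        # probability of the true class
    alpha_t = np.where(y == 1, alpha, 1.0 - alpha)
    return float(np.mean(-alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)))

# A well-classified example (p_t near 1) contributes almost nothing, while a
# hard example (p_t near 0) keeps nearly its full cross-entropy weight.
y = np.array([1, 1, 0])
p = np.array([0.95, 0.30, 0.10])
print(binary_focal_loss(p, y))
```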
{ "cite_N": [ "@cite_7", "@cite_36", "@cite_41", "@cite_29", "@cite_52", "@cite_6", "@cite_0", "@cite_45", "@cite_20" ], "mid": [ "1857884451", "2477332545", "2952821979", "", "", "", "2113635748", "", "2133434696" ], "abstract": [ "We propose a novel visual tracking algorithm based on the representations from a discriminatively trained Convolutional Neural Network (CNN). Our algorithm pretrains a CNN using a large set of videos with tracking groundtruths to obtain a generic target representation. Our network is composed of shared layers and multiple branches of domain-specific layers, where domains correspond to individual training sequences and each branch is responsible for binary classification to identify target in each domain. We train each domain in the network iteratively to obtain generic target representations in the shared layers. When tracking a target in a new sequence, we construct a new network by combining the shared layers in the pretrained CNN with a new binary classification layer, which is updated online. Online tracking is performed by evaluating the candidate windows randomly sampled around the previous target state. The proposed algorithm illustrates outstanding performance in existing tracking benchmarks.", "Recently significant performance improvement in face detection was made possible by deeply trained convolutional networks. In this report, a novel approach for training state-of-the-art face detector is described. The key is to exploit the idea of hard negative mining and iteratively update the Faster R-CNN based face detector with the hard negatives harvested from a large set of background examples. We demonstrate that our face detector outperforms state-of-the-art detectors on the FDDB dataset, which is the de facto standard for evaluating face detection algorithms.", "Important gains have recently been obtained in object detection by using training objectives that focus on hard negative examples, i.e., negative examples that are currently rated as positive or ambiguous by the detector. These examples can strongly influence parameters when the network is trained to correct them. Unfortunately, they are often sparse in the training data, and are expensive to obtain. In this work, we show how large numbers of hard negatives can be obtained automatically by analyzing the output of a trained detector on video sequences. In particular, detections that are isolated in time , i.e., that have no associated preceding or following detections, are likely to be hard negatives. We describe simple procedures for mining large numbers of such hard negatives (and also hard positives ) from unlabeled video data. Our experiments show that retraining detectors on these automatically obtained examples often significantly improves performance. We present experiments on multiple architectures and multiple data sets, including face detection, pedestrian detection and other object categories.", "", "", "", "Boosted decision trees are among the most popular learning techniques in use today. While exhibiting fast speeds at test time, relatively slow training renders them impractical for applications with real-time learning requirements. We propose a principled approach to overcome this drawback. We prove a bound on the error of a decision stump given its preliminary error on a subset of the training data; the bound may be used to prune unpromising features early in the training process. 
We propose a fast training algorithm that exploits this bound, yielding speedups of an order of magnitude at no cost in the final performance of the classifier. Our method is not a new variant of Boosting; rather, it is used in conjunction with existing Boosting algorithms and other sampling methods to achieve even greater speedups.", "", "Typical object detectors trained on images perform poorly on video, as there is a clear distinction in domain between the two types of data. In this paper, we tackle the problem of adapting object detectors learned from images to work well on videos. We treat the problem as one of unsupervised domain adaptation, in which we are given labeled data from the source domain (image), but only unlabeled data from the target domain (video). Our approach, self-paced domain adaptation, seeks to iteratively adapt the detector by re-training the detector with automatically discovered target domain examples, starting with the easiest first. At each iteration, the algorithm adapts by considering an increased number of target domain examples, and a decreased number of source domain examples. To discover target domain examples from the vast amount of video data, we introduce a simple, robust approach that scores trajectory tracks instead of bounding boxes. We also show how rich and expressive features specific to the target domain can be incorporated under the same framework. We show promising results on the 2011 TRECVID Multimedia Event Detection [1] and LabelMe Video [2] datasets that illustrate the benefit of our approach to adapt object detectors to video." ] }
1812.07264
2947759080
Cloud services and other shared third-party infrastructures allow individual content providers to easily scale their services based on current resource demands. In this paper, we consider an individual content provider that wants to minimize its delivery costs under the assumptions that the storage and bandwidth resources it requires are elastic, the content provider only pays for the resources that it consumes, and costs are proportional to the resource usage. Within this context, we (i) derive worst-case bounds for the optimal cost and competitive cost ratios of different classes of cache on Mth request cache insertion policies, (ii) derive explicit average cost expressions and bounds under arbitrary inter-request distributions, (iii) derive explicit average cost expressions and bounds for short-tailed (deterministic, Erlang, and exponential) and heavy-tailed (Pareto) inter-request distributions, and (iv) present numeric and trace-based evaluations that reveal insights into the relative cost performance of the policies. Our results show that a window-based cache on 2nd request policy using a single threshold optimized to minimize worst-case costs provides good average performance across the different distributions and the full parameter ranges of each considered distribution, making it an attractive choice for a wide range of practical conditions where request rates of individual file objects typically are not known and can change quickly.
Most existing caching works focus on replacement policies @cite_21 @cite_18 . However, it has recently been shown that cache insertion policies play a very important role in reducing the total delivery costs @cite_22 @cite_1 . Motivated by these works, this paper focuses on the delivery cost differences among discriminatory selective cache insertion policies.
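As a concrete illustration of the insertion policies under discussion, the sketch below implements a window-based "cache on Mth request" filter: an object is admitted only on its Mth request within a sliding window. The class name, the per-object request history, and the unbounded cache (in line with the elastic-resource setting of this paper) are our own illustrative choices, not details taken from the cited works.

```python
from collections import defaultdict, deque

class CacheOnMthRequest:
    """Window-based 'cache on Mth request' insertion filter (illustrative)."""

    def __init__(self, m=2, window=3600.0):
        self.m = m
        self.window = window
        self.history = defaultdict(deque)  # object id -> recent request times
        self.cache = set()                 # cached ids; capacity is elastic here

    def request(self, obj, now):
        if obj in self.cache:
            return "hit"
        h = self.history[obj]
        h.append(now)
        while h and now - h[0] > self.window:  # forget requests outside window
            h.popleft()
        if len(h) >= self.m:                   # Mth request within the window
            self.cache.add(obj)
            del self.history[obj]
        return "miss"

# With m=1 this degenerates to indiscriminate on-demand caching; m=2 with a
# tuned window corresponds to the cache-on-2nd-request policy discussed above.
```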
{ "cite_N": [ "@cite_18", "@cite_21", "@cite_1", "@cite_22" ], "mid": [ "2006062966", "1978690967", "2528947498", "2406213739" ], "abstract": [ "Academic and corporate communities have been dedicating considerable effort to World Wide Web caching. When correctly deployed, Web caching systems can lead to significant bandwidth savings, server load balancing, perceived network latency reduction, and higher content availability. We survey the state of the art in caching designs, presenting a taxonomy of architectures and describing a variety of specific trends and techniques.", "Web caching is an important technique to scale the Internet. One important performance factor of Web caches is the replacement strategy. Due to specific characteristics of the World Wide Web, there exist a huge number of proposals for cache replacement. This article proposes a classification for these proposals that subsumes prior classifications. Using this classification, different proposals and their advantages and disadvantages are described. Furthermore, the article discusses the importance of cache replacement strategies in modern proxy caches and outlines potential future research topics.", "The ephemeral content popularity seen with many content delivery applications can make indiscriminate on-demand caching in edge networks highly inefficient, since many of the content items that are added to the cache will not be requested again from that network. In this paper, we address the problem of designing and evaluating more selective edge-network caching policies. The need for such policies is demonstrated through an analysis of a dataset recording YouTube video requests from users on an edge network over a 20-month period. We then develop a novel workload modelling approach for such applications and apply it to study the performance of alternative edge caching policies, including indiscriminate caching and cache on @math th request for different @math . The latter policies are found able to greatly reduce the fraction of the requested items that are inserted into the cache, at the cost of only modest increases in cache miss rate. Finally, we quantify and explore the potential room for improvement from use of other possible predictors of further requests. We find that although room for substantial improvement exists when comparing performance to that of a perfect “oracle” policy, such improvements are unlikely to be achievable in practice.", "This paper \"peeks under the covers\" at the subsystems that provide the basic functionality of a leading content delivery network. Based on our experiences in building one of the largest distributed systems in the world, we illustrate how sophisticated algorithmic research has been adapted to balance the load between and within server clusters, manage the caches on servers, select paths through an overlay routing network, and elect leaders in various contexts. In each instance, we first explain the theory underlying the algorithms, then introduce practical considerations not captured by the theoretical models, and finally describe what is implemented in practice. Through these examples, we highlight the role of algorithmic research in the design of complex networked systems. The paper also illustrates the close synergy that exists between research and industry where research ideas cross over into products and product requirements drive future research." ] }
1812.07264
2947759080
Cloud services and other shared third-party infrastructures allow individual content providers to easily scale their services based on current resource demands. In this paper, we consider an individual content provider that wants to minimize its delivery costs under the assumptions that the storage and bandwidth resources it requires are elastic, the content provider only pays for the resources that it consumes, and costs are proportional to the resource usage. Within this context, we (i) derive worst-case bounds for the optimal cost and competitive cost ratios of different classes of cache on Mth request cache insertion policies, (ii) derive explicit average cost expressions and bounds under arbitrary inter-request distributions, (iii) derive explicit average cost expressions and bounds for short-tailed (deterministic, Erlang, and exponential) and heavy-tailed (Pareto) inter-request distributions, and (iv) present numeric and trace-based evaluations that reveal insights into the relative cost performance of the policies. Our results show that a window-based cache on 2nd request policy using a single threshold optimized to minimize worst-case costs provides good average performance across the different distributions and the full parameter ranges of each considered distribution, making it an attractive choice for a wide range of practical conditions where request rates of individual file objects typically are not known and can change quickly.
Few papers (regardless of replacement policy) have modeled discriminatory selective cache insertion policies such as cache on Mth request. This class of policies is motivated by the risk of cache pollution due to ephemeral content popularity and the long tail of one-timers (one-hit wonders) observed in edge networks @cite_12 @cite_15 @cite_11 @cite_1 . Recent works include trace-based evaluations of such policies @cite_22 @cite_1 . Carlsson and Eager @cite_1 also present simple analytic models for hit and insertion probabilities. However, in contrast to the analysis presented here, they ignore cache replacement, assuming that content is not evicted until interest in the content has expired. The authors of @cite_0 @cite_25 and Gast and Van Houdt @cite_20 @cite_9 present TTL-based recurrence expressions and approximations for two variations of this policy class, referred to as k-LRU and LRU(m) in their works. However, none of these works present performance bounds or consider the total delivery cost. In contrast, we derive both worst-case bounds and average-case analyses under a cost model that captures both bandwidth and storage costs.
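To make the k-LRU / LRU(m) variants concrete: they chain several LRU lists together, so an object must be requested repeatedly (advancing one list per request) before it reaches the list that actually stores data. The sketch below, in which all lists share one size and only the last list counts as a hit, is a simplification of our own; the cited analyses treat these variants with considerably more care.

```python
from collections import OrderedDict

class KLRU:
    """Sketch of a k-LRU chain: k-1 'virtual' LRU lists storing only object
    ids, followed by one real LRU cache. Equal list sizes are a simplification."""

    def __init__(self, k=2, size=100):
        self.lists = [OrderedDict() for _ in range(k)]
        self.size = size

    def request(self, obj):
        for i, lst in enumerate(self.lists):
            if obj in lst:
                del lst[obj]
                nxt = self.lists[min(i + 1, len(self.lists) - 1)]
                nxt[obj] = True                  # promote (or refresh MRU position)
                if len(nxt) > self.size:
                    nxt.popitem(last=False)      # evict the LRU entry
                return i == len(self.lists) - 1  # a hit only in the real cache
        self.lists[0][obj] = True                # unseen objects enter list 1
        if len(self.lists[0]) > self.size:
            self.lists[0].popitem(last=False)
        return False
```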
{ "cite_N": [ "@cite_22", "@cite_9", "@cite_1", "@cite_0", "@cite_15", "@cite_20", "@cite_25", "@cite_12", "@cite_11" ], "mid": [ "2406213739", "2524013375", "2528947498", "2400388591", "2061442141", "2046128376", "", "1986697120", "" ], "abstract": [ "This paper \"peeks under the covers\" at the subsystems that provide the basic functionality of a leading content delivery network. Based on our experiences in building one of the largest distributed systems in the world, we illustrate how sophisticated algorithmic research has been adapted to balance the load between and within server clusters, manage the caches on servers, select paths through an overlay routing network, and elect leaders in various contexts. In each instance, we first explain the theory underlying the algorithms, then introduce practical considerations not captured by the theoretical models, and finally describe what is implemented in practice. Through these examples, we highlight the role of algorithmic research in the design of complex networked systems. The paper also illustrates the close synergy that exists between research and industry where research ideas cross over into products and product requirements drive future research.", "Computer system and network performance can be significantly improved by caching frequently used information. When the cache size is limited, the cache replacement algorithm has an important impact on the effectiveness of caching. In this paper we introduce time-to-live (TTL) approximations to determine the cache hit probability of two classes of cache replacement algorithms: the recently introduced h-LRU and LRU(m). These approximations only require the requests to be generated according to a general Markovian arrival process (MAP). This includes phase-type renewal processes and the IRM model as special cases. We provide both numerical and theoretical support for the claim that the proposed TTL approximations are asymptotically exact. In particular, we show that the transient hit probability converges to the solution of a set of ODEs (under the IRM model), where the fixed point of the set of ODEs corresponds to the TTL approximation. We further show, by using synthetic and trace-based workloads, that h-LRU and LRU(m) perform alike, while the latter requires less work when a hit miss occurs. We also show that, as opposed to LRU, h-LRU and LRU(m) are sensitive to the correlation between consecutive inter-request times.", "The ephemeral content popularity seen with many content delivery applications can make indiscriminate on-demand caching in edge networks highly inefficient, since many of the content items that are added to the cache will not be requested again from that network. In this paper, we address the problem of designing and evaluating more selective edge-network caching policies. The need for such policies is demonstrated through an analysis of a dataset recording YouTube video requests from users on an edge network over a 20-month period. We then develop a novel workload modelling approach for such applications and apply it to study the performance of alternative edge caching policies, including indiscriminate caching and cache on @math th request for different @math . The latter policies are found able to greatly reduce the fraction of the requested items that are inserted into the cache, at the cost of only modest increases in cache miss rate. Finally, we quantify and explore the potential room for improvement from use of other possible predictors of further requests. 
We find that although room for substantial improvement exists when comparing performance to that of a perfect “oracle” policy, such improvements are unlikely to be achievable in practice.", "", "User-Generated Content has become very popular since new web services such as YouTube allow for the distribution of user-produced media content. YouTube-like services are different from existing traditional VoD services in that the service provider has only limited control over the creation of new content. We analyze how content distribution in YouTube is realized and then conduct a measurement study of YouTube traffic in a large university campus network. Based on these measurements, we analyzed the duration and the data rate of streaming sessions, the popularity of videos, and access patterns for video clips from the clients in the campus network. The analysis of the traffic shows that trace statistics are relatively stable over short-term periods while long-term trends can be observed. We demonstrate how synthetic traces can be generated from the measured traces and show how these synthetic traces can be used as inputs to trace-driven simulations. We also analyze the benefits of alternative distribution infrastructures to improve the performance of a YouTube-like VoD service. The results of these simulations show that P2P-based distribution and proxy caching can reduce network traffic significantly and allow for faster access to video clips.", "In this paper we study the performance of a family of cache replacement algorithms. The cache is decomposed into lists. Items enter the cache via the first list. An item enters the cache via the first list and jumps to the next list whenever a hit on it occurs. The classical policies FIFO, RANDOM, CLIMB and its hybrids are obtained as special cases. We present explicit expressions for the cache content distribution and miss probability under the IRM model. We develop an algorithm with a time complexity that is polynomial in the cache size and linear in the number of items to compute the exact miss probability. We introduce lower and upper bounds on the latter that can be computed in a time that is linear in the cache size times the number of items. We further introduce a mean field model to approximate the transient behavior of the miss probability and prove that this model becomes exact as the cache size and number of items tends to infinity. We show that the set of ODEs associated to the mean field model has a unique fixed point that can be used to approximate the miss probability in case the exact computation becomes too time consuming. Using this approximation, we provide guidelines on how to select a replacement algorithm within the family considered such that a good trade-off is achieved between the cache reactivity and its steady-state hit probability. We simulate these cache replacement algorithms on traces of real data and show that they can outperform LRU. Finally, we also disprove the well-known conjecture that the CLIMB algorithm is the optimal finite-memory replacement algorithm under the IRM model.", "", "This paper presents a traffic characterization study of the popular video sharing service, YouTube. Over a three month period we observed almost 25 million transactions between users on an edge network and YouTube, including more than 600,000 video downloads. We also monitored the globally popular videos over this period of time. 
In the paper we examine usage patterns, file properties, popularity and referencing characteristics, and transfer behaviors of YouTube, and compare them to traditional Web and media streaming workload characteristics. We conclude the paper with a discussion of the implications of the observed characteristics. For example, we find that as with the traditional Web, caching could improve the end user experience, reduce network bandwidth consumption, and reduce the load on YouTube's core server infrastructure. Unlike traditional Web caching, Web 2.0 provides additional meta-data that should be exploited to improve the effectiveness of strategies like caching.", "" ] }
1812.07264
2947759080
Cloud services and other shared third-party infrastructures allow individual content providers to easily scale their services based on current resource demands. In this paper, we consider an individual content provider that wants to minimize its delivery costs under the assumptions that the storage and bandwidth resources it requires are elastic, the content provider only pays for the resources that it consumes, and costs are proportional to the resource usage. Within this context, we (i) derive worst-case bounds for the optimal cost and competitive cost ratios of different classes of cache on Mth request cache insertion policies, (ii) derive explicit average cost expressions and bounds under arbitrary inter-request distributions, (iii) derive explicit average cost expressions and bounds for short-tailed (deterministic, Erlang, and exponential) and heavy-tailed (Pareto) inter-request distributions, and (iv) present numeric and trace-based evaluations that reveal insights into the relative cost performance of the policies. Our results show that a window-based cache on 2nd request policy using a single threshold optimized to minimize worst-case costs provides good average performance across the different distributions and the full parameter ranges of each considered distribution, making it an attractive choice for a wide range of practical conditions where request rates of individual file objects typically are not known and can change quickly.
Finally, it is important to note that TTL-based eviction policies @cite_7 @cite_2 (and variations thereof @cite_4 ), as considered in this paper, have been found useful for approximating the performance of capacity-driven replacement policies such as LRU @cite_26 @cite_3 @cite_28 @cite_14 @cite_24 . Our results may therefore also provide insight for the case in which an individual content provider uses a fixed-size cache. Generalizations of the TTL-based Che approximation @cite_26 , and TTL-based caches in general, have proven useful for analyzing individual caches @cite_26 @cite_3 @cite_28 @cite_14 @cite_24 and networks of caches @cite_6 @cite_16 @cite_13 @cite_14 @cite_24 , and for optimizing different system designs @cite_27 @cite_23 @cite_19 @cite_10 .
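For reference, the Che approximation mentioned here reduces an LRU cache of capacity C under Poisson (IRM) arrivals to a set of independent TTL caches sharing one characteristic time T_C, chosen so that the expected occupancy equals the capacity: sum_i (1 - e^{-lambda_i T_C}) = C, with per-object hit probability h_i = 1 - e^{-lambda_i T_C}. Below is a minimal fixed-point solver; the bisection bracketing and the Zipf example are our own illustrative choices.

```python
import numpy as np

def che_characteristic_time(rates, capacity, tol=1e-9):
    """Solve sum_i (1 - exp(-rates[i] * T)) = capacity for T by bisection."""
    occupancy = lambda T: np.sum(1.0 - np.exp(-rates * T))
    lo, hi = 0.0, 1.0
    while occupancy(hi) < capacity:   # grow the upper bracket until it covers C
        hi *= 2.0
    while hi - lo > tol * hi:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if occupancy(mid) < capacity else (lo, mid)
    return 0.5 * (lo + hi)

# Zipf-like popularities over 10,000 objects, LRU cache holding 100 of them.
n = 10_000
rates = 1.0 / np.arange(1, n + 1)                # lambda_i proportional to 1/i
T_c = che_characteristic_time(rates, capacity=100)
hit_prob = 1.0 - np.exp(-rates * T_c)            # per-object hit probabilities
print(T_c, np.sum(rates * hit_prob) / np.sum(rates))  # aggregate hit ratio
```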
{ "cite_N": [ "@cite_26", "@cite_4", "@cite_14", "@cite_7", "@cite_28", "@cite_10", "@cite_3", "@cite_6", "@cite_24", "@cite_19", "@cite_27", "@cite_23", "@cite_2", "@cite_16", "@cite_13" ], "mid": [ "2150495639", "2790763892", "", "2168282297", "", "2529874850", "", "1549860141", "", "2419063994", "2098792022", "2262385276", "2050680603", "", "" ], "abstract": [ "This paper aims at finding fundamental design principles for hierarchical Web caching. An analytical modeling technique is developed to characterize an uncooperative two-level hierarchical caching system where the least recently used (LRU) algorithm is locally run at each cache. With this modeling technique, we are able to identify a characteristic time for each cache, which plays a fundamental role in understanding the caching processes. In particular, a cache can be viewed roughly as a low-pass filter with its cutoff frequency equal to the inverse of the characteristic time. Documents with access frequencies lower than this cutoff frequency have good chances to pass through the cache without cache hits. This viewpoint enables us to take any branch of the cache tree as a tandem of low-pass filters at different cutoff frequencies, which further results in the finding of two fundamental design principles. Finally, to demonstrate how to use the principles to guide the caching algorithm design, we propose a cooperative hierarchical Web caching architecture based on these principles. Both model-based and real trace simulation studies show that the proposed cooperative architecture results in more than 50 memory saving and substantial central processing unit (CPU) power saving for the management and update of cache entries compared with the traditional uncooperative hierarchical caching architecture.", "By caching content at geographically distributed servers, content delivery applications can achieve scalability and reduce wide-area network traffic. However, each deployed cache has an associated cost. When the request rate from the local region is sufficiently high this cost will be justified, but as the request rate varies, for example according to a daily cycle, there may be long periods when the benefit of the cache does not justify the cost. Cloud computing offers a solution to problems of this kind, by supporting the dynamic allocation and release of resources according to need. In this paper, we analyze the potential benefits from dynamically instantiating caches using resources from cloud service providers. We develop novel analytic caching models that accommodate time-varying request rates, transient behavior as a cache fills following instantiation, and selective cache insertion policies. Using these models, within the context of a simple cost model, we then develop bounds and compare policies with optimized parameter selections to obtain insights into key cost performance tradeoffs. We find that dynamic cache instantiation has the potential to provide substantial cost reductions in some cases, but that this potential is strongly dependent on the object popularity skew. We also find that selective \"Cache on k-th request\" cache insertion policies can be even more beneficial in this context than with conventional edge caches.", "", "This paper presents a way of modeling the hit rates of caches that use a time-to-live (TTL)-based consistency policy. TTL-based consistency, as exemplified by DNS and Web caches, is a policy in which a data item, once retrieved, remains valid for a period known as the \"time-to-live\". 
Cache systems using large TTL periods are known to have high hit rates and scale well, but the effects of using shorter TTL periods are not well understood. We model hit rate as a function of request arrival times and the choice of TTL, enabling us to better understand cache behavior for shorter TTL periods. Our formula for the hit rate is closed form and relies upon a simplifying assumption about the interarrival times of requests for the data item in question: that these requests can be modeled as a sequence of independent and identically distributed random variables. Analyzing extensive DNS traces, we find that the results of the formula match observed statistics surprisingly well; in particular, the analysis is able to adequately explain the somewhat counterintuitive empirical finding that the cache hit rate for DNS accesses rapidly increases as a function of TTL, exceeding 80% for a TTL of 15 minutes.", "", "There has been increasing interest in designing and developing highly scalable infrastructures to support the efficient distribution of content. This has led to the recent development of content-oriented network architectures that rely on on-demand caching. This paper addresses the question of how a cache provider can monetize its service. Standard cache management policies such as least recently used (LRU) treat different content in a strongly coupled manner that makes it difficult for a cache provider to design individualized contracts. We propose the use of timer-based caching for the purpose of designing contracts, which allow providers to monetize caching. We focus on on-demand request-based contracts that allow content providers (CPs) to negotiate contracts at the time that requests are made. We propose and analyze three variations, one where a contract is negotiated only at the time of a miss, and two where contracts are negotiated at the times of both misses and hits. The latter two differ from one another according to whether pricing is based on cache occupancy (time content spends in the cache) or on request rate. We conclude that the first one is least preferable and that the last one provides the provider greater opportunity for profit and greater flexibility to CPs.", "", "Many researchers have been working on the performance analysis of caching in Information-Centric Networks (ICNs) under various replacement policies like Least Recently Used (LRU), FIFO or Random (RND). However, no exact results are provided, and many approximate models do not scale even for the simple network of two caches connected in tandem. In this paper, we introduce a Time-To-Live based policy (TTL), that assigns a timer to each content stored in the cache and redraws the timer each time the content is requested (at each hit/miss). We show that our TTL policy is more general than LRU, FIFO or RND, since it is able to mimic their behavior under an appropriate choice of its parameters. Moreover, the analysis of networks of TTL-based caches appears simpler not only under the Independent Reference Model (IRM, on which many existing results rely) but also with the Renewal Model for requests. In particular, we determine exact formulas for the performance metrics of interest for a linear network and a tree network with one root cache and N leaf caches. 
For more general networks, we propose an approximate solution with relative errors smaller than 10^-3 and 10^-2 for exponentially distributed and constant TTLs respectively.", "", "In this paper we analyze the hit performance of cache systems that receive file requests with general arrival distributions and different popularities. We consider timer-based (TTL) policies, with differentiated timers over which we optimize. The optimal policy is shown to be related to the monotonicity of the hazard rate function of the inter-arrival distribution. In particular for decreasing hazard rates, timer policies outperform the static policy of caching the most popular contents. We provide explicit solutions for the optimal policy in the case of Pareto-distributed inter-request times and a Zipf distribution of file popularities, including a compact fluid characterization in the limit of a large number of files. We compare it through simulation with classical policies, such as least-recently-used and discuss its performance. Finally, we analyze extensions of the optimization framework to a line network of caches.", "Geographically distributed cloud platforms enable an attractive approach to large-scale content delivery. Storage at various sites can be dynamically acquired from (and released back to) the cloud ...", "In any caching system, the admission and eviction policies determine which contents are added and removed from a cache when a miss occurs. Usually, these policies are devised so as to mitigate staleness and increase the hit probability. Nonetheless, the utility of having a high hit probability can vary across contents. This occurs, for instance, when service level agreements must be met, or if certain contents are more difficult to obtain than others. In this paper, we propose utility-driven caching, where we associate with each content a utility, which is a function of the corresponding content hit probability. We formulate optimization problems where the objectives are to maximize the sum of utilities over all contents. These problems differ according to the stringency of the cache capacity constraint. Our framework enables us to reverse engineer classical replacement policies such as LRU and FIFO, by computing the utility functions that they maximize. We also develop online algorithms that can be used by service providers to implement various caching policies based on arbitrary utility functions.", "We propose a general modeling framework to evaluate the performance of cache consistency algorithms. In addition to the usual hit rate, we introduce the hit^* rate as a consistency measure, which captures the fraction of non-stale downloads from the cache. We apply these ideas to the analysis of the fixed TTL consistency algorithm in the presence of network delays. The hit and hit^* rates are evaluated when requests and updates are modeled by renewal processes. Classical results on the renewal function lead to various bounds.", "", "" ] }
1812.07264
2947759080
Cloud services and other shared third-party infrastructures allow individual content providers to easily scale their services based on current resource demands. In this paper, we consider an individual content provider that wants to minimize its delivery costs under the assumptions that the storage and bandwidth resources it requires are elastic, the content provider only pays for the resources that it consumes, and costs are proportional to the resource usage. Within this context, we (i) derive worst-case bounds for the optimal cost and competitive cost ratios of different classes of cache on Mth request cache insertion policies, (ii) derive explicit average cost expressions and bounds under arbitrary inter-request distributions, (iii) derive explicit average cost expressions and bounds for short-tailed (deterministic, Erlang, and exponential) and heavy-tailed (Pareto) inter-request distributions, and (iv) present numeric and trace-based evaluations that reveal insights into the relative cost performance of the policies. Our results show that a window-based cache on 2nd request policy using a single threshold optimized to minimize worst-case costs provides good average performance across the different distributions and the full parameter ranges of each considered distribution, making it an attractive choice for a wide range of practical conditions where request rates of individual file objects typically are not known and can change quickly.
As we show here, this type of elasticity assumption can also be a powerful toolbox for deriving tight worst-case bounds and exact average-case cost ratios of different policies. Furthermore, as discussed in Section 9.1, since both storage costs and bandwidth costs are proportional to the file sizes, the results can also easily be extended to analyze scenarios with variable-sized objects, at no additional computational cost. In contrast, just finding lower and upper bounds for the cache miss rate of the optimal offline policy is computationally expensive when caches are non-elastic @cite_8 , and even simple LRU is hard to analyze under non-elastic constraints @cite_5 @cite_17 .
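Under these elasticity assumptions, per-object delivery costs can also be estimated by direct simulation when closed forms become unwieldy. The sketch below estimates the long-run cost per request of a single object under a window-based cache-on-2nd-request policy with TTL-style eviction, charging a bandwidth cost per miss and a storage cost per unit time cached; the policy details, cost parameters, and samplers are our own illustrative choices and do not reproduce the paper's exact definitions or expressions.

```python
import numpy as np

rng = np.random.default_rng(0)

def avg_cost_per_request(sample_gap, n=200_000, W=1.0, T=1.0, c_bw=1.0, c_s=0.1):
    """Monte Carlo cost/request for one object: insert on a request arriving
    within W of the previous one; once cached, evict after T without requests."""
    cost, cached = 0.0, False
    for _ in range(n):
        g = sample_gap()             # i.i.d. time since the previous request
        if cached:
            if g <= T:
                cost += c_s * g      # hit: pay storage over the whole gap
                continue
            cost += c_s * T          # stored until the TTL expired
            cached = False
        cost += c_bw                 # miss: pay the bandwidth (fetch) cost
        if g <= W:                   # 2nd request within the window: insert
            cached = True
    return cost / n

# Short-tailed vs. heavy-tailed inter-request times, both with unit mean.
print(avg_cost_per_request(lambda: rng.exponential(1.0)))
print(avg_cost_per_request(lambda: (rng.pareto(2.0) + 1.0) * 0.5))
```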
{ "cite_N": [ "@cite_5", "@cite_17", "@cite_8" ], "mid": [ "72677960", "2057230952", "2963916427" ], "abstract": [ "", "In some network and application scenarios, it is useful to cache content in network nodes on the fly, at line rate. Resilience of in-network caches can be improved by guaranteeing that all content therein stored is valid. Digital signatures could be indeed used to verify content integrity and provenance. However, their operation may be much slower than the line rate, thus limiting caching of cryptographically verified objects to a small subset of the forwarded ones. How this affects caching performance? To answer such a question, we devise a simple analytical approach which permits to assess performance of an LRU caching strategy storing a randomly sampled subset of requests. A key feature of our model is the ability to handle traffic beyond the traditional Independent Reference Model, thus permitting us to understand how performance vary in different temporal locality conditions. Results, also verified on real world traces, show that content integrity verification does not necessarily bring about a performance penalty; rather, in some specific (but practical) conditions, performance may even improve.", "Many recent caching systems aim to improve miss ratios, but there is no good sense among practitioners of how much further miss ratios can be improved. In other words, should the systems community continue working on this problem? Currently, there is no principled answer to this question. In practice, object sizes often vary by several orders of magnitude, where computing the optimal miss ratio (OPT) is known to be NP-hard. The few known results on caching with variable object sizes provide very weak bounds and are impractical to compute on traces of realistic length. We propose a new method to compute upper and lower bounds on OPT. Our key insight is to represent caching as a min-cost flow problem, hence we call our method the flow-based offline optimal (FOO). We prove that, under simple independence assumptions, FOO's bounds become tight as the number of objects goes to infinity. Indeed, FOO's error over 10M requests of production CDN and storage traces is negligible: at most 0.3 . FOO thus reveals, for the first time, the limits of caching with variable object sizes. While FOO is very accurate, it is computationally impractical on traces with hundreds of millions of requests. We therefore extend FOO to obtain more efficient bounds on OPT, which we call practical flow-based offline optimal (PFOO). We evaluate PFOO on several full production traces and use it to compare OPT to prior online policies. This analysis shows that current caching systems are in fact still far from optimal, suffering 11-43 more cache misses than OPT, whereas the best prior offline bounds suggest that there is essentially no room for improvement." ] }
1812.07169
2904580001
This paper presents a method to explain the knowledge encoded in a convolutional neural network (CNN) quantitatively and semantically. The analysis of the specific rationale of each prediction made by the CNN presents a key issue of understanding neural networks, but it is also of significant practical values in certain applications. In this study, we propose to distill knowledge from the CNN into an explainable additive model, so that we can use the explainable model to provide a quantitative explanation for the CNN prediction. We analyze the typical bias-interpreting problem of the explainable model and develop prior losses to guide the learning of the explainable additive model. Experimental results have demonstrated the effectiveness of our method.
Distilling knowledge from a black-box model into an explainable model has emerged as a research direction in recent years. @cite_31 learned an explainable additive model, and @cite_30 distilled the knowledge of a network into an additive model. @cite_17 @cite_5 @cite_24 @cite_32 distilled representations of neural networks into tree structures. These methods did not explain the network knowledge using human-interpretable semantic concepts, whereas we pursue an explicitly quantitative explanation for each CNN prediction. More crucially, compared to previous additive models @cite_30 , our research successfully overcomes the bias-interpreting problem, which is the core challenge when there are many visual concepts to use for explanation.
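A minimal form of the distill-into-an-additive-model idea can be written down directly: fit per-feature shape functions to the black-box's outputs by backfitting, so that each feature's contribution to a prediction can be read off separately. The binned-mean smoother, the toy teacher, and all names below are our own illustrative choices; the cited methods use richer components such as trees or structured networks.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_additive_student(X, teacher_out, n_bins=16, n_rounds=20):
    """Distill teacher_out ~ f(X) into bias + sum_j f_j(X[:, j]) by backfitting,
    with each f_j a piecewise-constant (binned-mean) shape function."""
    n, d = X.shape
    bins = [np.quantile(X[:, j], np.linspace(0, 1, n_bins + 1)[1:-1])
            for j in range(d)]
    idx = [np.digitize(X[:, j], bins[j]) for j in range(d)]
    bias = teacher_out.mean()
    shapes = [np.zeros(n_bins) for _ in range(d)]
    contrib = np.zeros((n, d))
    for _ in range(n_rounds):
        for j in range(d):
            # Partial residual: remove all contributions except feature j's.
            resid = teacher_out - bias - contrib.sum(axis=1) + contrib[:, j]
            for b in range(n_bins):              # binned-mean 1-D smoother
                mask = idx[j] == b
                if mask.any():
                    shapes[j][b] = resid[mask].mean()
            contrib[:, j] = shapes[j][idx[j]]
    return bias, bins, shapes

# Toy 'black box' teacher: the student only sees its outputs on X.
X = rng.normal(size=(2000, 3))
teacher = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2   # stands in for a CNN's output
bias, bins, shapes = fit_additive_student(X, teacher)
# Each shapes[j] is an inspectable per-feature explanation of the teacher; a
# prediction decomposes as bias + sum_j shapes[j][np.digitize(x[j], bins[j])].
```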
{ "cite_N": [ "@cite_30", "@cite_32", "@cite_24", "@cite_5", "@cite_31", "@cite_17" ], "mid": [ "2806874342", "", "", "2785017485", "2517259736", "2769421449" ], "abstract": [ "Machine Learning algorithms are increasingly being used in recent years due to their flexibility in model fitting and increased predictive performance. However, the complexity of the models makes them hard for the data analyst to interpret the results and explain them without additional tools. This has led to much research in developing various approaches to understand the model behavior. In this paper, we present the Explainable Neural Network (xNN), a structured neural network designed especially to learn interpretable features. Unlike fully connected neural networks, the features engineered by the xNN can be extracted from the network in a relatively straightforward manner and the results displayed. With appropriate regularization, the xNN provides a parsimonious explanation of the relationship between the features and the output. We illustrate this interpretable feature--engineering property on simulated examples.", "", "", "Model distillation was originally designed to distill knowledge from a large, complex teacher model to a faster, simpler student model without significant loss in prediction accuracy. We investigate model distillation for another goal -- transparency -- investigating if fully-connected neural networks can be distilled into models that are transparent or interpretable in some sense. Our teacher models are multilayer perceptrons, and we try two types of student models: (1) tree-based generalized additive models (GA2Ms), a type of boosted, short tree (2) gradient boosted trees (GBTs). More transparent student models are forthcoming. Our results are not yet conclusive. GA2Ms show some promise for distilling binary classification teachers, but not yet regression. GBTs are not \"directly\" interpretable but may be promising for regression teachers. GA2M models may provide a computationally viable alternative to additive decomposition methods for global function approximation.", "Accuracy and interpretation are two goals of any successful predictive models. Most existing works have to suffer the tradeoff between the two by either picking complex black box models such as recurrent neural networks (RNN) or relying on less accurate traditional models with better interpretation such as logistic regression. To address this dilemma, we present REverse Time AttentIoN model (RETAIN) for analyzing Electronic Health Records (EHR) data that achieves high accuracy while remaining clinically interpretable. RETAIN is a two-level neural attention model that can find influential past visits and significant clinical variables within those visits (e.g,. key diagnoses). RETAIN mimics physician practice by attending the EHR data in a reverse time order so that more recent clinical visits will likely get higher attention. Experiments on a large real EHR dataset of 14 million visits from 263K patients over 8 years confirmed the comparable predictive accuracy and computational scalability to the state-of-the-art methods such as RNN. Finally, we demonstrate the clinical interpretation with concrete examples from RETAIN.", "Deep neural networks have proved to be a very effective way to perform classification tasks. They excel when the input data is high dimensional, the relationship between the input and the output is complicated, and the number of labeled training examples is large. 
But it is hard to explain why a learned network makes a particular classification decision on a particular test case. This is due to their reliance on distributed hierarchical representations. If we could take the knowledge acquired by the neural net and express the same knowledge in a model that relies on hierarchical decisions instead, explaining a particular decision would be much easier. We describe a way of using a trained neural net to create a type of soft decision tree that generalizes better than one learned directly from the training data." ] }
1812.07221
2905053447
To continuously generate trajectories for serial manipulators with high-dimensional degrees of freedom (DOF) in dynamic environments, a real-time optimal trajectory generation method based on machine learning, aimed at high-dimensional inputs, is presented in this paper. First, a learning optimization (LO) framework is established, and implementations with different sub-methods are discussed. Additionally, multiple criteria are defined to evaluate the performance of LO models. Furthermore, aiming at high-dimensional inputs, a database generation method based on an input-space dimension-reducing mapping is proposed. Finally, this method is validated on motion planning for haptic feedback manipulators (HFM) in virtual reality systems. Results show that the input-space dimension-reducing method can significantly improve the efficiency and quality of database generation and consequently improve the performance of the LO. Moreover, using this LO method, real-time trajectory generation with high-dimensional inputs can be achieved, which lays a foundation for continuous trajectory planning for high-DOF robots in complex environments.
Traditional point-to-point trajectory planning started from interpolation-based methods, such as polynomial interpolation @cite_9 @cite_3 and B-spline interpolation @cite_12 @cite_15 . In general, pure interpolation-based methods are able to accomplish the required tasks, but they have difficulty achieving optimal performance in specific respects. To obtain optimal trajectories, non-linear optimization problems are constructed with objectives based on time, energy, and power consumption, and with constraints such as mechanical structure, timing, and obstacle avoidance @cite_7 . Von @cite_2 investigated the non-linear optimization with three separate criteria of minimum time, minimum energy, and minimum power consumption, and solved it numerically by combining a direct collocation method with an indirect multiple shooting method. @cite_13 presented an optimal planning problem that seeks a compromise between time, energy, and power consumption, and solved it with the Sequential Quadratic Programming (SQP) method. However, none of the aforementioned optimization-based methods run in real time, owing to the computational complexity of the non-linear optimization.
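For reference, the simplest of the interpolation-based schemes above can be written out in closed form. A cubic joint trajectory q(t) = a0 + a1 t + a2 t^2 + a3 t^3 with rest-to-rest boundary conditions q(0) = q0, q(T) = qf, q'(0) = q'(T) = 0 has coefficients a0 = q0, a1 = 0, a2 = 3(qf - q0)/T^2, a3 = -2(qf - q0)/T^3. The sketch below evaluates this per joint; it illustrates the textbook method rather than any specific cited implementation.

```python
import numpy as np

def cubic_rest_to_rest(q0, qf, T, t):
    """Cubic point-to-point joint trajectory with zero boundary velocities.

    q0, qf : start/end joint positions, shape (n_joints,)
    T      : motion duration; t : sample times, shape (n_samples,)
    Returns positions and velocities, each of shape (n_samples, n_joints)."""
    q0, qf = np.asarray(q0, float), np.asarray(qf, float)
    d = qf - q0
    a2, a3 = 3.0 * d / T**2, -2.0 * d / T**3
    t = np.asarray(t, float)[:, None]
    q = q0 + a2 * t**2 + a3 * t**3
    qd = 2.0 * a2 * t + 3.0 * a3 * t**2
    return q, qd

# A 3-DOF move executed in 2 s, sampled at 1 kHz.
ts = np.linspace(0.0, 2.0, 2001)
q, qd = cubic_rest_to_rest([0.0, 0.2, -0.5], [1.0, -0.3, 0.5], 2.0, ts)
assert np.allclose(qd[0], 0.0) and np.allclose(qd[-1], 0.0)  # rest-to-rest
```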
{ "cite_N": [ "@cite_7", "@cite_9", "@cite_3", "@cite_2", "@cite_15", "@cite_13", "@cite_12" ], "mid": [ "2142224528", "2168361256", "2163170747", "1495441213", "2920521397", "2068190491", "2017798125" ], "abstract": [ "We present a new optimization-based approach for robotic motion planning among obstacles. Like CHOMP (Covariant Hamiltonian Optimization for Motion Planning), our algorithm can be used to find collision-free trajectories from naA¯ve, straight-line initializations that might be in collision. At the core of our approach are (a) a sequential convex optimization procedure, which penalizes collisions with a hinge loss and increases the penalty coefficients in an outer loop as necessary, and (b) an efficient formulation of the no-collisions constraint that directly considers continuous-time safety Our algorithm is implemented in a software package called TrajOpt. We report results from a series of experiments comparing TrajOpt with CHOMP and randomized planners from OMPL, with regard to planning time and path quality. We consider motion planning for 7 DOF robot arms, 18 DOF full-body robots, statically stable walking motion for the 34 DOF Atlas humanoid robot, and physical experiments with the 18 DOF PR2. We also apply TrajOpt to plan curvature-constrained steerable needle trajectories in the SE(3) configuration space and multiple non-intersecting curved channels within 3D-printed implants for intracavitary brachytherapy. Details, videos, and source code are freely available at: http: rll.berkeley.edu trajopt ijrr.", "This paper presents a minimum-time trajectory planning method and a tracking control scheme for robot manipulators. In the first step, we find the minimum-time trajectories by optimizing cubic polynomial joint trajectories using the evolution strategy. In the second step, by the use of the evolution strategy we tune the sliding mode controller parameters for the robot manipulator to track precisely the trajectories that were found in the previous step. Experimental results show that the proposed method is very useful.", "An online algorithm for computing a robot manipulator's trajectory in joint space with velocity and acceleration constraints is proposed. Our theoretical discussion is based on optimizing the minimum possible time for velocity and acceleration constraints while using cubic splines. A method for calculating the wandering time is also presented. This value of the time gives prehand knowledge to the user about the time after which the wandering phenomenon starts. Simulation results of the proposed algorithm are also presented to show its efficiency.", "Minimum time and minimum energy point-to-point trajectories for an industrial robot of the type Manutec r3 are computed subject to state constraints on the angular velocities. The numerical solutions of these optimal control problems are obtained in an efficient way by a combination of a direct collocation and an indirect multiple shooting method. This combination links the benefits of both approaches: A large domain of convergence and a highly accurate solution. The numerical results show that the constraints on the angular velocities become active during large parts of the time optimal motion. But the resulting stress on the links can be significantly reduced by a minimum energy trajectory that is only about ten percent slower than the minimum time trajectory. 
As a by-product, the reliability of the direct collocation method in estimating adjoint variables and the efficiency of the combination of direct collocation and multiple shooting are demonstrated. The highly accurate solutions reported in this paper may also serve as benchmark problems for other methods.", "", "We discuss the problem of minimum cost trajectory planning for robotic manipulators. It consists of linking two points in the operational space while minimizing a cost function, taking into account dynamic equations of motion as well as bounds on joint positions, velocities, jerks and torques. This generic optimal control problem is transformed, via a clamped cubic spline model of joint temporal evolutions, into a non-linear constrained optimization problem which is then treated by the Sequential Quadratic Programming (SQP) method. Applications involving grasping a mobile object or obstacle avoidance are shown to illustrate the efficiency of the proposed planner.", "The grasping and stabilization of a spinning, noncooperative target satellite by means of a free-flying robot is addressed. A method for computing feasible robot trajectories for grasping a target with known geometry in a useful time is presented, based on nonlinear optimization and a look-up table. An off-line computation provides a data base for a mapping between a four-dimensional input space, to characterize the target motion, and an N-dimensional output space, representing the family of time-parameterized optimal robot trajectories. Simulation results show the effectiveness of the data base for computing grasping maneuvers in a useful time, for a sample range of spinning motions. The debris object consists of a satellite with solar appendages in Low Earth Orbit, which presents collision avoidance and timing challenges for executing the task." ] }
1812.07221
2905053447
To continuously generate trajectories for serial manipulators with high dimensional degrees of freedom (DOF) in the dynamic environment, a real-time optimal trajectory generation method based on machine learning aiming at high dimensional inputs is presented in this paper. First, a learning optimization (LO) framework is established, and implementations with different sub-methods are discussed. Additionally, multiple criteria are defined to evaluate the performance of LO models. Furthermore, aiming at high dimensional inputs, a database generation method based on input space dimension-reducing mapping is proposed. At last, this method is validated on motion planning for haptic feedback manipulators (HFM) in virtual reality systems. Results show that the input space dimension-reducing method can significantly elevate the efficiency and quality of database generation and consequently improve the performance of the LO. Moreover, using this LO method, real-time trajectory generation with high dimensional inputs can be achieved, which lays a foundation for continuous trajectory planning for high-DOF-robots in complex environments.
Non-linear optimization is prone to getting stuck in local minima and is therefore generally solved with multiple initial guesses to reach the global minimum, which is significantly costly and hard to run in real-time. Quickly finding the global minimum thus remains challenging. A promising idea is to learn from past data to reduce the online computation time @cite_20 @cite_21 @cite_14 . Lampariello et al. @cite_17 and @cite_19 solved the non-linear optimization by generating an optimal database offline and predicting solutions online by regression, and applied it to catching flying objects and bipedal walking. However, these methods are limited to specific applications and to cases with low-dimensional inputs. @cite_10 proposed a trajectory prediction method that employs previous data to speed up the generation of new trajectories; in addition, it considers high-dimensional inputs and defines a high-dimensional situation descriptor. Hauser @cite_11 extended this method to the more general context of nonlinear optimization problems and presented a general learning global optima (LGO) framework.
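To make the retrieval-plus-refinement idea concrete, here is a minimal Python sketch of offline database generation followed by online nearest-neighbour warm-starting of a local optimizer. It illustrates the general LGO-style scheme only, not the implementation of any cited system; the toy cost function, the problem descriptor, and all names are assumptions introduced for this example.
```python
# Sketch: learn-to-warm-start global optimization (illustrative only).
import numpy as np
from scipy.optimize import minimize

def cost(x, p):
    # Toy nonconvex objective parameterized by a problem descriptor p.
    return np.sum((x - p) ** 2) + np.sum(np.sin(5.0 * x))

# Offline phase: sample problem descriptors and store locally optimal solutions.
rng = np.random.default_rng(0)
params = rng.uniform(-1.0, 1.0, size=(200, 3))
database = [minimize(cost, p, args=(p,)).x for p in params]

# Online phase: warm-start from the stored solution of the nearest problem.
def solve_online(p_new):
    nearest = int(np.argmin(np.linalg.norm(params - p_new, axis=1)))
    return minimize(cost, database[nearest], args=(p_new,)).x

print(solve_online(np.array([0.2, -0.4, 0.7])))
```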
{ "cite_N": [ "@cite_14", "@cite_11", "@cite_21", "@cite_19", "@cite_10", "@cite_20", "@cite_17" ], "mid": [ "", "2963891176", "2157937859", "2173568935", "2079148749", "", "2095905067" ], "abstract": [ "", "This paper describes a data-driven framework for approximate global optimization in which precomputed solutions to a sample of problems are retrieved and adapted during online use to solve novel problems. This approach has promise for real-time applications in robotics, since it can produce near globally optimal solutions orders of magnitude faster than standard methods. This paper establishes theoretical conditions on how many and where samples are needed over the space of problems to achieve a given approximation quality. The framework is applied to solve globally optimal collision-free inverse kinematics problems, wherein large solution databases are used to produce near-optimal solutions in a submillisecond time on a standard PC.", "In this paper we introduce the LeGO (Learning for Global Optimization) approach for global optimization in which machine learning is used to predict the outcome of a computationally expensive global optimization run, based upon a suitable training performed by standard runs of the same global optimization method. We propose to use a Support Vector Machine (although different machine learning tools might be employed) to learn the relationship between the starting point of an algorithm and the final outcome (which is usually related to the function value at the point returned by the procedure). Numerical experiments performed both on classical test functions and on difficult space trajectory planning problems show that the proposed approach can be very effective in identifying good starting points for global optimization.", "Control of robot locomotion profits from the use of pre-planned trajectories. This paper presents a way to generalize globally optimal and dynamically consistent trajectories for cyclic bipedal walking. A small task-space consisting of stride-length and step time is mapped to spline parameters which fully define the optimal joint space motion. The paper presents the impact of different machine learning algorithms for velocity and torque optimal trajectories with respect to optimality and feasibility. To demonstrate the usefulness of the trajectories, a control approach is presented that allows general walking including transitions between points in the task-space.", "Trajectory planning and optimization is a fundamental problem in articulated robotics. Algorithms used typically for this problem compute optimal trajectories from scratch in a new situation. In effect, extensive data is accumulated containing situations together with the respective optimized trajectories--but this data is in practice hardly exploited. This article describes a novel method to learn from such data and speed up motion generation, a method we denote tajectory pediction. The main idea is to use demonstrated optimal motions to quickly predict appropriate trajectories for novel situations. These can be used to initialize and thereby drastically speed-up subsequent optimization of robotic movements. Our approach has two essential ingredients. First, to generalize from previous situations to new ones we need a situation descriptor--we construct features for such descriptors and use a sparse regularized feature selection approach to improve generalization. 
Second, the transfer of previously optimized trajectories to a new situation should not be made in joint angle space--we propose a more efficient task space transfer. We present extensive results in simulation to illustrate the benefits of the new method, and demonstrate it also with real robot hardware. Our experiments in diverse tasks show that we can predict good motion trajectories in new situations for which the refinement is much faster than an optimization from scratch.", "", "Many real-world tasks require fast planning of highly dynamic movements for their execution in real-time. The success often hinges on quickly finding one of the few plans that can achieve the task at all. A further challenge is to quickly find a plan which optimizes a desired cost. In this paper, we will discuss this problem in the context of catching small flying targets efficiently. This can be formulated as a non-linear optimization problem where the desired trajectory is encoded by an adequate parametric representation. The optimizer generates an energy-optimal trajectory by efficiently using the robot kinematic redundancy while taking into account maximal joint motion, collision avoidance and local minima. To enable the resulting method to work in real-time, examples of the global planner are generalized using nearest neighbour approaches, Support Vector Machines and Gaussian process regression, which are compared in this context. Evaluations indicate that the presented method is highly efficient in complex tasks such as ball-catching." ] }
1812.07221
2905053447
To continuously generate trajectories for serial manipulators with high dimensional degrees of freedom (DOF) in the dynamic environment, a real-time optimal trajectory generation method based on machine learning aiming at high dimensional inputs is presented in this paper. First, a learning optimization (LO) framework is established, and implementations with different sub-methods are discussed. Additionally, multiple criteria are defined to evaluate the performance of LO models. Furthermore, aiming at high dimensional inputs, a database generation method based on input space dimension-reducing mapping is proposed. At last, this method is validated on motion planning for haptic feedback manipulators (HFM) in virtual reality systems. Results show that the input space dimension-reducing method can significantly elevate the efficiency and quality of database generation and consequently improve the performance of the LO. Moreover, using this LO method, real-time trajectory generation with high dimensional inputs can be achieved, which lays a foundation for continuous trajectory planning for high-DOF-robots in complex environments.
The database for learning can be obtained either by recording past data or by generating it artificially. @cite_17 chose variables evenly across the motion range to generate samples and compared databases of different sizes. Hauser @cite_11 uniformly sampled an axis-aligned range of variables to generate a database, and additionally presented a lifelong learning mode that continuously generates examples in a separate background thread; the sensitivity of the required database size to the input dimension, and the database properties required to guarantee solution quality, were also discussed. However, for cases with high-dimensional inputs, these database generation methods, which choose sample variables randomly or evenly across the motion range, are time-consuming. Improving the efficiency of database generation is therefore a key issue to be solved.
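The contrast between exhaustive and dimension-reduced sampling can be sketched as follows. This is a hedged illustration: the uniform grid mirrors the axis-aligned sampling described above, while the PCA-based mapping is only a generic stand-in for a dimension-reducing mapping, not the specific method proposed in the surveyed paper.
```python
# Sketch: database generation by uniform vs. reduced-dimension sampling.
import numpy as np

def uniform_grid(lows, highs, pts_per_dim):
    axes = [np.linspace(l, h, pts_per_dim) for l, h in zip(lows, highs)]
    mesh = np.meshgrid(*axes, indexing="ij")
    return np.stack([m.ravel() for m in mesh], axis=1)

# Direct sampling: cost grows exponentially with the input dimension.
print(uniform_grid(np.zeros(6), np.ones(6), 5).shape)    # (15625, 6)

# Dimension-reduced sampling: fit a low-dimensional subspace to past inputs
# (PCA via SVD here) and sample only in that subspace.
past = np.random.rand(1000, 6)
mean = past.mean(axis=0)
_, _, vt = np.linalg.svd(past - mean, full_matrices=False)
basis = vt[:2]                                  # top-2 principal directions
coeffs = uniform_grid([-1.0, -1.0], [1.0, 1.0], 25)      # only 625 samples
samples = mean + coeffs @ basis                 # mapped back to 6-D inputs
print(samples.shape)                            # (625, 6)
```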
{ "cite_N": [ "@cite_11", "@cite_17" ], "mid": [ "2963891176", "2095905067" ], "abstract": [ "This paper describes a data-driven framework for approximate global optimization in which precomputed solutions to a sample of problems are retrieved and adapted during online use to solve novel problems. This approach has promise for real-time applications in robotics, since it can produce near globally optimal solutions orders of magnitude faster than standard methods. This paper establishes theoretical conditions on how many and where samples are needed over the space of problems to achieve a given approximation quality. The framework is applied to solve globally optimal collision-free inverse kinematics problems, wherein large solution databases are used to produce near-optimal solutions in a submillisecond time on a standard PC.", "Many real-world tasks require fast planning of highly dynamic movements for their execution in real-time. The success often hinges on quickly finding one of the few plans that can achieve the task at all. A further challenge is to quickly find a plan which optimizes a desired cost. In this paper, we will discuss this problem in the context of catching small flying targets efficiently. This can be formulated as a non-linear optimization problem where the desired trajectory is encoded by an adequate parametric representation. The optimizer generates an energy-optimal trajectory by efficiently using the robot kinematic redundancy while taking into account maximal joint motion, collision avoidance and local minima. To enable the resulting method to work in real-time, examples of the global planner are generalized using nearest neighbour approaches, Support Vector Machines and Gaussian process regression, which are compared in this context. Evaluations indicate that the presented method is highly efficient in complex tasks such as ball-catching." ] }
1812.07439
2905463339
Probabilistic programming is a programming paradigm for expressing flexible probabilistic models. Implementations of probabilistic programming languages employ a variety of inference algorithms, where sequential Monte Carlo methods are commonly used. A problem with current state-of-the-art implementations using sequential Monte Carlo inference is the alignment of program synchronization points. We propose a new static analysis approach based on the 0-CFA algorithm for automatically aligning higher-order probabilistic programs. We evaluate the automatic alignment on a phylogenetic model, showing a significant decrease in runtime and increase in accuracy.
Naturally, the work most closely related to ours can be found in papers on universal probabilistic programming languages using smc , such as WebPPL @cite_23 , Anglican @cite_19 , and Birch @cite_7 . WebPPL and Anglican are higher-order, functional ppl , while Birch is an imperative, object-oriented ppl . Anglican includes many smc algorithms, including the variations described in @cite_19 , as well as various mcmc methods. WebPPL includes fewer inference algorithms, but both smc and mcmc methods are available. Birch performs smc inference in combination with closed-form optimizations at runtime, automatically yielding a more optimized version of smc that exploits analytically tractable substructure such as conjugate priors and affine transformations @cite_7 . None of the languages above, however, address the alignment issue presented in this article. In essence, the programmer needs to be aware of the internals of the smc inference algorithm to write efficient models. Ideally, we would like the model and the inference to be as independent as possible. This is the goal of the work in this paper.
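To make the synchronization-point issue concrete, below is a toy bootstrap particle filter in Python: every particle must reach the same observe point before weighting and resampling, which is exactly what alignment has to guarantee. The model is a generic Gaussian random walk, unrelated to the internals of WebPPL, Anglican, or Birch.
```python
# Sketch: smc with lockstep observe/resample points (toy model).
import numpy as np

rng = np.random.default_rng(1)
N, T = 1000, 20
obs = np.cumsum(rng.normal(size=T))             # synthetic observations

particles = np.zeros(N)
for t in range(T):
    particles = particles + rng.normal(size=N)  # sample: transition step
    logw = -0.5 * (particles - obs[t]) ** 2     # observe: weight all particles
    w = np.exp(logw - logw.max())
    w /= w.sum()
    particles = particles[rng.choice(N, size=N, p=w)]  # resample in lockstep

print(particles.mean())
```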
{ "cite_N": [ "@cite_19", "@cite_7", "@cite_23" ], "mid": [ "2950113781", "2745919252", "" ], "abstract": [ "We introduce and demonstrate a new approach to inference in expressive probabilistic programming languages based on particle Markov chain Monte Carlo. Our approach is simple to implement and easy to parallelize. It applies to Turing-complete probabilistic programming languages and supports accurate inference in models that make use of complex control flow, including stochastic recursion. It also includes primitives from Bayesian nonparametric statistics. Our experiments show that this approach can be more efficient than previously introduced single-site Metropolis-Hastings methods.", "We introduce a dynamic mechanism for the solution of analytically-tractable substructure in probabilistic programs, using conjugate priors and affine transformations to reduce variance in Monte Carlo estimators. For inference with Sequential Monte Carlo, this automatically yields improvements such as locally-optimal proposals and Rao-Blackwellization. The mechanism maintains a directed graph alongside the running program that evolves dynamically as operations are triggered upon it. Nodes of the graph represent random variables, edges the analytically-tractable relationships between them. Random variables remain in the graph for as long as possible, to be sampled only when they are used by the program in a way that cannot be resolved analytically. In the meantime, they are conditioned on as many observations as possible. We demonstrate the mechanism with a few pedagogical examples, as well as a linear-nonlinear state-space model with simulated data, and an epidemiological model with real data of a dengue outbreak in Micronesia. In all cases one or more variables are automatically marginalized out to significantly reduce variance in estimates of the marginal likelihood, in the final case facilitating a random-weight or pseudo-marginal-type importance sampler for parameter estimation. We have implemented the approach in Anglican and a new probabilistic programming language called Birch.", "" ] }
1812.07439
2905463339
Probabilistic programming is a programming paradigm for expressing flexible probabilistic models. Implementations of probabilistic programming languages employ a variety of inference algorithms, where sequential Monte Carlo methods are commonly used. A problem with current state-of-the-art implementations using sequential Monte Carlo inference is the alignment of program synchronization points. We propose a new static analysis approach based on the 0-CFA algorithm for automatically aligning higher-order probabilistic programs. We evaluate the automatic alignment on a phylogenetic model, showing a significant decrease in runtime and increase in accuracy.
There also exists more theoretical work on smc for probabilistic programming. One example is a recent denotational validation of smc in probabilistic programming given by @cite_11 . This work also includes a denotational validation of mcmc , another common inference algorithm for ppl . Trace mcmc has also been proven correct by Borgström et al. @cite_8 through an operational semantics for a probabilistic untyped lambda calculus.
{ "cite_N": [ "@cite_8", "@cite_11" ], "mid": [ "2949804971", "2767781532" ], "abstract": [ "We develop the operational semantics of an untyped probabilistic lambda-calculus with continuous distributions, as a foundation for universal probabilistic programming languages such as Church, Anglican, and Venture. Our first contribution is to adapt the classic operational semantics of lambda-calculus to a continuous setting via creating a measure space on terms and defining step-indexed approximations. We prove equivalence of big-step and small-step formulations of this distribution-based semantics. To move closer to inference techniques, we also define the sampling-based semantics of a term as a function from a trace of random samples to a value. We show that the distribution induced by integrating over all traces equals the distribution-based semantics. Our second contribution is to formalize the implementation technique of trace Markov chain Monte Carlo (MCMC) for our calculus and to show its correctness. A key step is defining sufficient conditions for the distribution induced by trace MCMC to converge to the distribution-based semantics. To the best of our knowledge, this is the first rigorous correctness proof for trace MCMC for a higher-order functional language.", "We present a modular semantic account of Bayesian inference algorithms for probabilistic programming languages, as used in data science and machine learning. Sophisticated inference algorithms are often explained in terms of composition of smaller parts. However, neither their theoretical justification nor their implementation reflects this modularity. We show how to conceptualise and analyse such inference algorithms as manipulating intermediate representations of probabilistic programs using higher-order functions and inductive types, and their denotational semantics. Semantic accounts of continuous distributions use measurable spaces. However, our use of higher-order functions presents a substantial technical difficulty: it is impossible to define a measurable space structure over the collection of measurable functions between arbitrary measurable spaces that is compatible with standard operations on those functions, such as function application. We overcome this difficulty using quasi-Borel spaces, a recently proposed mathematical structure that supports both function spaces and continuous distributions. We define a class of semantic structures for representing probabilistic programs, and semantic validity criteria for transformations of these representations in terms of distribution preservation. We develop a collection of building blocks for composing representations. We use these building blocks to validate common inference algorithms such as Sequential Monte Carlo and Markov Chain Monte Carlo. To emphasize the connection between the semantic manipulation and its traditional measure theoretic origins, we use Kock's synthetic measure theory. We demonstrate its usefulness by proving a quasi-Borel counterpart to the Metropolis-Hastings-Green theorem." ] }
1812.07170
2905288168
Bug fixing is generally a manually-intensive task. However, recent work has proposed the idea of automated program repair, which aims to repair (at least a subset of) bugs in different ways such as code mutation, etc. Following in the same line of work as automated bug repair, in this paper we aim to leverage past fixes to propose fixes of current/future bugs. Specifically, we propose Ratchet, a corrective patch generation system using neural machine translation. By learning corresponding pre-correction and post-correction code in past fixes with a neural sequence-to-sequence model, Ratchet is able to generate a fix code for a given bug-prone code query. We perform an empirical study with five open source projects, namely Ambari, Camel, Hadoop, Jetty and Wicket, to evaluate the effectiveness of Ratchet. Our findings show that Ratchet can generate syntactically valid statements 98.7% of the time, and achieve an F1-measure between 0.41-0.83 with respect to the actual fixes adopted in the code base. In addition, we perform a qualitative validation using 20 participants to see whether the generated statements can be helpful in correcting bugs. Our survey showed that Ratchet's output was considered to be helpful in fixing the bugs on many occasions, even if the fix was not 100% correct.
There are several studies on probabilistic machine learning models of source code for different applications using different techniques. Allamanis et al. conducted a large survey on this topic @cite_4 . The table of representative code models was originally presented in that survey @cite_4 ; from the original table, non-refereed papers are excluded, some missing papers are added, and the column Data is newly prepared, summarizing the analyzed data in terms of programming languages, data sources, and historical information.
{ "cite_N": [ "@cite_4" ], "mid": [ "2963935794" ], "abstract": [ "Research at the intersection of machine learning, programming languages, and software engineering has recently taken important steps in proposing learnable probabilistic models of source code that exploit the abundance of patterns of code. In this article, we survey this work. We contrast programming languages against natural languages and discuss how these similarities and differences drive the design of probabilistic models. We present a taxonomy based on the underlying design principles of each model and use it to navigate the literature. Then, we review how researchers have adapted these models to application areas and discuss cross-cutting and application-specific challenges and opportunities." ] }
1812.07170
2905288168
Bug fixing is generally a manually-intensive task. However, recent work has proposed the idea of automated program repair, which aims to repair (at least a subset of) bugs in different ways such as code mutation, etc. Following in the same line of work as automated bug repair, in this paper we aim to leverage past fixes to propose fixes of current/future bugs. Specifically, we propose Ratchet, a corrective patch generation system using neural machine translation. By learning corresponding pre-correction and post-correction code in past fixes with a neural sequence-to-sequence model, Ratchet is able to generate a fix code for a given bug-prone code query. We perform an empirical study with five open source projects, namely Ambari, Camel, Hadoop, Jetty and Wicket, to evaluate the effectiveness of Ratchet. Our findings show that Ratchet can generate syntactically valid statements 98.7% of the time, and achieve an F1-measure between 0.41-0.83 with respect to the actual fixes adopted in the code base. In addition, we perform a qualitative validation using 20 participants to see whether the generated statements can be helpful in correcting bugs. Our survey showed that Ratchet's output was considered to be helpful in fixing the bugs on many occasions, even if the fix was not 100% correct.
From the Data column, we see that several programming languages have been studied, including Java, C, C#, JavaScript, and Python, among others. Although most studies collected data from code repositories, some used other data sources to build probabilistic models of source code, for example, programs from TopCoder.com @cite_73 , Microsoft Excel help forums @cite_82 , and Android programming tutorial videos @cite_87 . When mining source code repositories, collecting source code at selected snapshots is a common procedure. However, when considering software evolution, that is, software that is updated continuously, learning over long periods is more practical. As discussed in , online machine learning is one of the challenges in this scenario. Previous studies demonstrated learning methods over long periods, called training on errors @cite_17 @cite_66 . This can be a good hint for future research on online machine learning for patch generation.
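As a rough illustration of such a training-on-errors loop, the sketch below classifies each incoming module first and updates the model only when the prediction was wrong, in the spirit of the cited fault-prone filtering work. The hashed bag-of-tokens featurization and the perceptron update are hypothetical simplifications, not the cited systems' actual classifiers.
```python
# Sketch: "training on errors" over a stream of modules (illustrative only).
import numpy as np

DIM = 2 ** 16
w = np.zeros(DIM)

def featurize(source_code):
    x = np.zeros(DIM)
    for tok in source_code.split():            # hashed bag-of-tokens features
        x[hash(tok) % DIM] += 1.0
    return x

def classify_then_train(source_code, is_faulty):
    global w
    x = featurize(source_code)
    pred = bool(w @ x > 0.0)
    if pred != is_faulty:                      # perceptron update on errors only
        w += (1.0 if is_faulty else -1.0) * x
    return pred

stream = [("int a = b / 0 ;", True), ("return x + 1 ;", False)]
for code, label in stream:                     # modules arrive in commit order
    classify_then_train(code, label)
```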
{ "cite_N": [ "@cite_82", "@cite_87", "@cite_73", "@cite_66", "@cite_17" ], "mid": [ "2560790486", "2532717157", "2962725091", "2162045124", "2046671629" ], "abstract": [ "", "The number of programming tutorial videos on the web increases daily. Video hosting sites such as YouTube host millions of video lectures, with many programming tutorials for various languages and platforms. These videos contain a wealth of valuable information, including code that may be of interest. However, two main challenges have so far prevented the effective indexing of programming tutorial videos: (i) code in tutorials is typically written on-the-fly, with only parts of the code visible in each frame, and (ii) optical character recognition (OCR) is not precise enough to produce quality results from videos. We present a novel approach for extracting code from videos that is based on: (i) consolidating code across frames, and (ii) statistical language models for applying corrections at different levels, allowing us to make corrections by choosing the most likely token, combination of tokens that form a likely line structure, and combination of lines that lead to a likely code fragment in a particular language. We implemented our approach in a tool called ACE, and used it to extract code from 40 Android video tutorials on YouTube. Our evaluation shows that ACE extracts code with high accuracy, enabling deep indexing of video tutorials.", "We study the problem of building generative models of natural source code (NSC); that is, source code written by humans and meant to be understood by humans. Our primary contribution is to describe new generative models that are tailored to NSC. The models are based on probabilistic context free grammars (PCFGs) and neuro-probabilistic language models (Mnih & Teh, 2012), which are extended to incorporate additional source code-specific structure. These models can be efficiently trained on a corpus of source code and outperform a variety of less structured baselines in terms of predictive log likelihoods on held-out data.", "Fault-prone module detection in source code is important for assurance of software quality. Most previous fault-prone detection approaches have been based on software metrics. Such approaches, however, have difficulties in collecting the metrics and in constructing mathematical models based on the metrics. To mitigate such difficulties, we have proposed a novel approach for detecting fault-prone modules using a spam-filtering technique, named Fault-Prone Filtering. In our approach, fault-prone modules are detected in such a way that the source code modules are considered as text files and are applied to the spam filter directly. In practice, we use the training only errors procedure and apply this procedure to fault-prone. Since no pre-training is required, this procedure can be applied to an actual development field immediately. This paper describes an extension of the training only errors procedures. We introduce a precise unit of training, \"modified lines of code,\" instead of methods. In addition, we introduce the dynamic threshold for classification. The result of the experiment shows that our extension leads to twice the precision with about the same recall, and improves 15 on the best F1 measurement.", "The fault-prone module detection in source code is of importance for assurance of software quality. Most of previous fault-prone detection approaches are based on software metrics. 
Such approaches, however, have difficulties in collecting the metrics and constructing mathematical models based on the metrics. In order to mitigate such difficulties, we propose a novel approach for detecting fault-prone modules using a spam filtering technique, named Fault-Prone Filtering. Because of the increase of needs for spam e-mail detection, the spam filtering technique has been progressed as a convenient and effective technique for text mining. In our approach, fault-prone modules are detected in a way that the source code modules are considered as text files and are applied to the spam filter directly. This paper describes the training on errors procedure to apply fault-prone filtering in practice. Since no pre-training is required, this procedure can be applied to actual development field immediately. In order to show the usefulness of our approach, we conducted an experiment using a large source code repository of Java based open source project. The result of experiment shows that our approach can classify about 85 of software modules correctly. The result also indicates that fault-prone modules can be detected relatively low cost at an early stage." ] }
1812.07124
2952911521
We propose a novel semi-supervised, Multi-Level Sequential Generative Adversarial Network (MLS-GAN) architecture for group activity recognition. In contrast to previous works which utilise manually annotated individual human action predictions, we allow the models to learn their own internal representations to discover pertinent sub-activities that aid the final group activity recognition task. The generator is fed with person-level and scene-level features that are mapped temporally through LSTM networks. Action-based feature fusion is performed through novel gated fusion units that are able to consider long-term dependencies, exploring the relationships among all individual actions, to learn an intermediate representation or 'action code' for the current group activity. The network achieves its semi-supervised behaviour by allowing it to perform group action classification together with the adversarial real/fake validation. We perform extensive evaluations on different architectural variants to demonstrate the importance of the proposed architecture. Furthermore, we show that utilising both person-level and scene-level features facilitates the group activity prediction better than using only person-level features. Our proposed architecture outperforms current state-of-the-art results for sports and pedestrian based classification tasks on Volleyball and Collective Activity datasets, showing its flexible nature for effective learning of group activities.
Early works on group activity recognition @cite_6 @cite_5 @cite_29 @cite_28 addressed the task on surveillance and sports video datasets with probabilistic and discriminative models that utilise hand-crafted features. As these methods always require manual feature engineering, attention has shifted towards deep network based methods due to their automatic feature learning capability.
{ "cite_N": [ "@cite_28", "@cite_5", "@cite_29", "@cite_6" ], "mid": [ "1989004008", "2053619738", "2057067088", "" ], "abstract": [ "We deal with the problem of recognizing social roles played by people in an event. Social roles are governed by human interactions, and form a fundamental component of human event description. We focus on a weakly supervised setting, where we are provided different videos belonging to an event class, without training role labels. Since social roles are described by the interaction between people in an event, we propose a Conditional Random Field to model the inter-role interactions, along with person specific social descriptors. We develop tractable variational inference to simultaneously infer model weights, as well as role assignment to all people in the videos. We also present a novel YouTube social roles dataset with ground truth role annotations, and introduce annotations on a subset of videos from the TRECVID-MED11 [1] event kits for evaluation purposes. The performance of the model is compared against different baseline methods on these datasets.", "In this paper we present a new framework for pedestrian action categorization. Our method enables the classification of actions whose semantic can be only analyzed by looking at the collective behavior of pedestrians in the scene. Examples of these actions are waiting by a street intersection versus standing in a queue. To that end, we exploit the spatial distribution of pedestrians in the scene as well as their pose and motion for achieving robust action classification. Our proposed solution employs extended Kalman filtering for tracking of detected pedestrians in 2D 1 2 scene coordinates as well as camera parameter and horizon estimation for tracker filtering and stabilization. We present a local spatio-temporal descriptor effective in capturing the spatial distribution of pedestrians over time as well as their pose. This descriptor captures pedestrian activity while requiring no high level scene understanding. Our work is tested against highly challenging real world pedestrian video sequences captured by low resolution hand held cameras. Experimental results on a 5-class action dataset indicate that our solution: i) is effective in classifying collective pedestrian activities; ii) is tolerant to challenging real world conditions such as variation in illumination, scale, viewpoint as well as partial occlusion and background motion; iii) outperforms state-of-the art action classification techniques.", "We present a hierarchical model for human activity recognition in entire multi-person scenes. Our model describes human behaviour at multiple levels of detail, ranging from low-level actions through to high-level events. We also include a model of social roles, the expected behaviours of certain people, or groups of people, in a scene. The hierarchical model includes these varied representations, and various forms of interactions between people present in a scene. The model is trained in a discriminative max-margin framework. Experimental results demonstrate that this model can improve performance at all considered levels of detail, on two challenging datasets.", "" ] }
1812.07166
2904646808
Early diagnosis of pulmonary nodules (PNs) can improve the survival rate of patients and yet is a challenging task for radiologists due to the image noise and artifacts in computed tomography (CT) images. In this paper, we propose a novel and effective abnormality detector implementing the attention mechanism and group convolution on 3D single-shot detector (SSD) called group-attention SSD (GA-SSD). We find that group convolution is effective in extracting rich context information between continuous slices, and attention network can learn the target features automatically. We collected a large-scale dataset that contained 4146 CT scans with annotations of varying types and sizes of PNs (even PNs smaller than 3mm were annotated). To the best of our knowledge, this dataset is the largest cohort with relatively complete annotations for PNs detection. Our experimental results show that the proposed group-attention SSD outperforms the classic SSD framework as well as the state-of-the-art 3DCNN, especially on some challenging lesion types.
Recent object detection models can be grouped into two types @cite_32 : two-stage approaches @cite_16 @cite_23 @cite_1 and one-stage methods @cite_2 @cite_20 . The former first generates a series of candidate boxes as proposals and then classifies the proposals with a convolutional neural network. The latter directly casts object localization as a regression problem without generating candidate boxes. Because of this difference, the former is superior in detection and localization accuracy, while the latter is superior in speed.
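The one-stage formulation can be sketched at the shape level: a convolutional head predicts class scores and box offsets for k default boxes at every feature-map location, and every (location, box) pair is directly a detection candidate, with no proposal stage. The random arrays below merely stand in for real network activations; all sizes are illustrative assumptions.
```python
# Sketch: SSD-style one-stage detection head (shapes only, random activations).
import numpy as np

H, W, K, C = 38, 38, 6, 21                 # feature map, boxes/location, classes
cls_logits = np.random.randn(H, W, K, C)   # per-box class scores
box_deltas = np.random.randn(H, W, K, 4)   # per-box offsets (dx, dy, dw, dh)

# Decode: softmax over classes, then threshold confidences before NMS.
e = np.exp(cls_logits - cls_logits.max(axis=-1, keepdims=True))
scores = e / e.sum(axis=-1, keepdims=True)
keep = scores.max(axis=-1) > 0.5
print("candidates above threshold:", int(keep.sum()))
```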
{ "cite_N": [ "@cite_1", "@cite_32", "@cite_23", "@cite_2", "@cite_16", "@cite_20" ], "mid": [ "2613718673", "2890715498", "", "2963037989", "2102605133", "2193145675" ], "abstract": [ "State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [7] and Fast R-CNN [5] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully-convolutional network that simultaneously predicts object bounds and objectness scores at each position. RPNs are trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. With a simple alternating optimization, RPN and Fast R-CNN can be trained to share convolutional features. For the very deep VGG-16 model [19], our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007 (73.2 mAP) and 2012 (70.4 mAP) using 300 proposals per image. Code is available at https: github.com ShaoqingRen faster_rcnn.", "Object detection, one of the most fundamental and challenging problems in computer vision, seeks to locate object instances from a large number of predefined categories in natural images. Deep learning techniques have emerged as a powerful strategy for learning feature representations directly from data and have led to remarkable breakthroughs in the field of generic object detection. Given this period of rapid evolution, the goal of this paper is to provide a comprehensive survey of the recent achievements in this field brought about by deep learning techniques. More than 300 research contributions are included in this survey, covering many aspects of generic object detection: detection frameworks, object feature representation, object proposal generation, context modeling, training strategies, and evaluation metrics. We finish the survey by identifying promising directions for future research.", "", "We present YOLO, a new approach to object detection. Prior work on object detection repurposes classifiers to perform detection. Instead, we frame object detection as a regression problem to spatially separated bounding boxes and associated class probabilities. A single neural network predicts bounding boxes and class probabilities directly from full images in one evaluation. Since the whole detection pipeline is a single network, it can be optimized end-to-end directly on detection performance. Our unified architecture is extremely fast. Our base YOLO model processes images in real-time at 45 frames per second. A smaller version of the network, Fast YOLO, processes an astounding 155 frames per second while still achieving double the mAP of other real-time detectors. Compared to state-of-the-art detection systems, YOLO makes more localization errors but is less likely to predict false positives on background. Finally, YOLO learns very general representations of objects. It outperforms other detection methods, including DPM and R-CNN, when generalizing from natural images to other domains like artwork.", "Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. 
The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30 relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3 . Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http: www.cs.berkeley.edu rbg rcnn.", "We present a method for detecting objects in images using a single deep neural network. Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. SSD is simple relative to methods that require object proposals because it completely eliminates proposal generation and subsequent pixel or feature resampling stages and encapsulates all computation in a single network. This makes SSD easy to train and straightforward to integrate into systems that require a detection component. Experimental results on the PASCAL VOC, COCO, and ILSVRC datasets confirm that SSD has competitive accuracy to methods that utilize an additional object proposal step and is much faster, while providing a unified framework for both training and inference. For (300 300 ) input, SSD achieves 74.3 mAP on VOC2007 test at 59 FPS on a Nvidia Titan X and for (512 512 ) input, SSD achieves 76.9 mAP, outperforming a comparable state of the art Faster R-CNN model. Compared to other single stage methods, SSD has much better accuracy even with a smaller input image size. Code is available at https: github.com weiliu89 caffe tree ssd." ] }
1812.07166
2904646808
Early diagnosis of pulmonary nodules (PNs) can improve the survival rate of patients and yet is a challenging task for radiologists due to the image noise and artifacts in computed tomography (CT) images. In this paper, we propose a novel and effective abnormality detector implementing the attention mechanism and group convolution on 3D single-shot detector (SSD) called group-attention SSD (GA-SSD). We find that group convolution is effective in extracting rich context information between continuous slices, and attention network can learn the target features automatically. We collected a large-scale dataset that contained 4146 CT scans with annotations of varying types and sizes of PNs (even PNs smaller than 3mm were annotated). To the best of our knowledge, this dataset is the largest cohort with relatively complete annotations for PNs detection. Our experimental results show that the proposed group-attention SSD outperforms the classic SSD framework as well as the state-of-the-art 3DCNN, especially on some challenging lesion types.
The attention mechanism is inspired by human visual attention: human vision is guided by attention, which assigns higher weights to objects than to the background. Recently, attention mechanisms have been successfully applied in NLP @cite_10 @cite_7 @cite_11 @cite_28 @cite_27 as well as in computer vision @cite_22 @cite_12 @cite_18 . Most conventional methods for generic object detection neglect the correlations between proposed regions. The Non-local Network @cite_19 and the Relation Networks @cite_4 are variants of the attention mechanism that exploit the interrelationships between objects. Our method is motivated by these works and aims, for medical images, to capture the inter-correlations between CT slices and between lung nodule pixels.
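The core of such attention and non-local operations can be shown in a few lines: every position is updated with a weighted sum over all positions, so correlations between distant pixels, regions, or CT slices can be modeled. This is a generic scaled dot-product self-attention sketch with random projection weights, not the exact block of the cited networks.
```python
# Sketch: non-local / self-attention over N positions (generic form).
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

N, C, D = 64, 32, 16                   # positions (e.g. slices), channels, head dim
x = np.random.randn(N, C)
Wq, Wk, Wv = np.random.randn(3, C, D)  # random projections (placeholders)

q, k, v = x @ Wq, x @ Wk, x @ Wv
attn = softmax(q @ k.T / np.sqrt(D))   # N x N pairwise affinities
y = attn @ v                           # each position attends to all others
print(y.shape)                         # (64, 16)
```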
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_22", "@cite_7", "@cite_28", "@cite_19", "@cite_27", "@cite_10", "@cite_12", "@cite_11" ], "mid": [ "2952011421", "2964080601", "2737725206", "2950635152", "2584017349", "", "2953150860", "2963403868", "2773003563", "2130942839" ], "abstract": [ "Attention-based learning for fine-grained image recognition remains a challenging task, where most of the existing methods treat each object part in isolation, while neglecting the correlations among them. In addition, the multi-stage or multi-scale mechanisms involved make the existing methods less efficient and hard to be trained end-to-end. In this paper, we propose a novel attention-based convolutional neural network (CNN) which regulates multiple object parts among different input images. Our method first learns multiple attention region features of each input image through the one-squeeze multi-excitation (OSME) module, and then apply the multi-attention multi-class constraint (MAMC) in a metric learning framework. For each anchor feature, the MAMC functions by pulling same-attention same-class features closer, while pushing different-attention or different-class features away. Our method can be easily trained end-to-end, and is highly efficient which requires only one training stage. Moreover, we introduce Dogs-in-the-Wild, a comprehensive dog species dataset that surpasses similar existing datasets by category coverage, data volume and annotation quality. This dataset will be released upon acceptance to facilitate the research of fine-grained image recognition. Extensive experiments are conducted to show the substantial improvements of our method on four benchmark datasets.", "Although it is well believed for years that modeling relations between objects would help object recognition, there has not been evidence that the idea is working in the deep learning era. All state-of-the-art object detection systems still rely on recognizing object instances individually, without exploiting their relations during learning. This work proposes an object relation module. It processes a set of objects simultaneously through interaction between their appearance feature and geometry, thus allowing modeling of their relations. It is lightweight and in-place. It does not require additional supervision and is easy to embed in existing networks. It is shown effective on improving object recognition and duplicate removal steps in the modern object detection pipeline. It verifies the efficacy of modeling object relations in CNN based detection. It gives rise to the first fully end-to-end object detector.", "Recognizing fine-grained categories (e.g., bird species) is difficult due to the challenges of discriminative region localization and fine-grained feature learning. Existing approaches predominantly solve these challenges independently, while neglecting the fact that region detection and fine-grained feature learning are mutually correlated and thus can reinforce each other. In this paper, we propose a novel recurrent attention convolutional neural network (RA-CNN) which recursively learns discriminative region attention and region-based feature representation at multiple scales in a mutual reinforced way. The learning at each scale consists of a classification sub-network and an attention proposal sub-network (APN). 
The APN starts from full images, and iteratively generates region attention from coarse to fine by taking previous prediction as a reference, while the finer scale network takes as input an amplified attended region from previous scale in a recurrent way. The proposed RA-CNN is optimized by an intra-scale classification loss and an inter-scale ranking loss, to mutually learn accurate region attention and fine-grained representation. RA-CNN does not need bounding box part annotations and can be trained end-to-end. We conduct comprehensive experiments and show that RA-CNN achieves the best performance in three fine-grained tasks, with relative accuracy gains of 3.3 , 3.7 , 3.8 , on CUB Birds, Stanford Dogs and Stanford Cars, respectively.", "In this paper, we propose a novel neural network model called RNN Encoder-Decoder that consists of two recurrent neural networks (RNN). One RNN encodes a sequence of symbols into a fixed-length vector representation, and the other decodes the representation into another sequence of symbols. The encoder and decoder of the proposed model are jointly trained to maximize the conditional probability of a target sequence given a source sequence. The performance of a statistical machine translation system is empirically found to improve by using the conditional probabilities of phrase pairs computed by the RNN Encoder-Decoder as an additional feature in the existing log-linear model. Qualitatively, we show that the proposed model learns a semantically and syntactically meaningful representation of linguistic phrases.", "Automatic detection of pulmonary nodules in thoracic computed tomography (CT) scans has been an active area of research for the last two decades. However, there have only been few studies that provide a comparative performance evaluation of different systems on a common database. We have therefore set up the LUNA16 challenge, an objective evaluation framework for automatic nodule detection algorithms using the largest publicly available reference database of chest CT scans, the LIDC-IDRI data set. In LUNA16, participants develop their algorithm and upload their predictions on 888 CT scans in one of the two tracks: 1) the complete nodule detection track where a complete CAD system should be developed, or 2) the false positive reduction track where a provided set of nodule candidates should be classified. This paper describes the setup of LUNA16 and presents the results of the challenge so far. Moreover, the impact of combining individual systems on the detection performance was also investigated. It was observed that the leading solutions employed convolutional networks and used the provided set of nodule candidates. The combination of these solutions achieved an excellent sensitivity of over 95 at fewer than 1.0 false positives per scan. This highlights the potential of combining algorithms to improve the detection performance. Our observer study with four expert readers has shown that the best system detects nodules that were missed by expert readers who originally annotated the LIDC-IDRI data. We released this set of additional nodules for further development of CAD systems.", "", "How to model a pair of sentences is a critical issue in many NLP tasks such as answer selection (AS), paraphrase identification (PI) and textual entailment (TE). 
Most prior work (i) deals with one individual task by fine-tuning a specific system; (ii) models each sentence's representation separately, rarely considering the impact of the other sentence; or (iii) relies fully on manually designed, task-specific linguistic features. This work presents a general Attention Based Convolutional Neural Network (ABCNN) for modeling a pair of sentences. We make three contributions. (i) ABCNN can be applied to a wide variety of tasks that require modeling of sentence pairs. (ii) We propose three attention schemes that integrate mutual influence between sentences into CNN; thus, the representation of each sentence takes into consideration its counterpart. These interdependent sentence pair representations are more powerful than isolated sentence representations. (iii) ABCNN achieves state-of-the-art performance on AS, PI and TE tasks.", "The dominant sequence transduction models are based on complex recurrent or convolutional neural networks that include an encoder and a decoder. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles, by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.0 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature.", "Recognizing fine-grained categories (e.g., bird species) highly relies on discriminative part localization and part-based fine-grained feature learning. Existing approaches predominantly solve these challenges independently, while neglecting the fact that part localization (e.g., head of a bird) and fine-grained feature learning (e.g., head shape) are mutually correlated. In this paper, we propose a novel part learning approach by a multi-attention convolutional neural network (MA-CNN), where part generation and feature learning can reinforce each other. MA-CNN consists of convolution, channel grouping and part classification sub-networks. The channel grouping network takes as input feature channels from convolutional layers, and generates multiple parts by clustering, weighting and pooling from spatially-correlated channels. The part classification network further classifies an image by each individual part, through which more discriminative fine-grained features can be learned. Two losses are proposed to guide the multi-task learning of channel grouping and part classification, which encourages MA-CNN to generate more discriminative parts from feature channels and learn better fine-grained features from parts in a mutual reinforced way. MA-CNN does not need bounding box part annotation and can be trained end-to-end. We incorporate the learned parts from MA-CNN with part-CNN for recognition, and show the best performances on three challenging published fine-grained datasets, e.g., CUB-Birds, FGVC-Aircraft and Stanford-Cars.", "Deep Neural Networks (DNNs) are powerful models that have achieved excellent performance on difficult learning tasks. 
Although DNNs work well whenever large labeled training sets are available, they cannot be used to map sequences to sequences. In this paper, we present a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure. Our method uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector. Our main result is that on an English to French translation task from the WMT-14 dataset, the translations produced by the LSTM achieve a BLEU score of 34.8 on the entire test set, where the LSTM's BLEU score was penalized on out-of-vocabulary words. Additionally, the LSTM did not have difficulty on long sentences. For comparison, a phrase-based SMT system achieves a BLEU score of 33.3 on the same dataset. When we used the LSTM to rerank the 1000 hypotheses produced by the aforementioned SMT system, its BLEU score increases to 36.5, which is close to the previous state of the art. The LSTM also learned sensible phrase and sentence representations that are sensitive to word order and are relatively invariant to the active and the passive voice. Finally, we found that reversing the order of the words in all source sentences (but not target sentences) improved the LSTM's performance markedly, because doing so introduced many short term dependencies between the source and the target sentence which made the optimization problem easier." ] }
1812.07166
2904646808
Early diagnosis of pulmonary nodules (PNs) can improve the survival rate of patients and yet is a challenging task for radiologists due to the image noise and artifacts in computed tomography (CT) images. In this paper, we propose a novel and effective abnormality detector implementing the attention mechanism and group convolution on 3D single-shot detector (SSD) called group-attention SSD (GA-SSD). We find that group convolution is effective in extracting rich context information between continuous slices, and attention network can learn the target features automatically. We collected a large-scale dataset that contained 4146 CT scans with annotations of varying types and sizes of PNs (even PNs smaller than 3mm were annotated). To the best of our knowledge, this dataset is the largest cohort with relatively complete annotations for PNs detection. Our experimental results show that the proposed group-attention SSD outperforms the classic SSD framework as well as the state-of-the-art 3DCNN, especially on some challenging lesion types.
Group convolution first appeared in AlexNet @cite_0 , where it was introduced to cope with insufficient GPU memory; it also increases the diagonal correlation between filters and reduces the number of training parameters. Recently, many successful applications have demonstrated the effectiveness of the group convolution module, such as channel-wise convolution in Xception (Extreme Inception) @cite_15 @cite_24 and ResNeXt @cite_29 .
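A parameter count makes the memory argument concrete: a standard k×k convolution has C_in · C_out · k² weights, whereas with g groups each output channel only sees C_in/g input channels, dividing the count by g. The layer sizes below are illustrative only.
```python
# Parameter count of a (possibly grouped) k x k convolution layer.
def conv_params(c_in, c_out, k, groups=1):
    assert c_in % groups == 0 and c_out % groups == 0
    return (c_in // groups) * c_out * k * k

print(conv_params(256, 256, 3))             # 589824 weights (standard conv)
print(conv_params(256, 256, 3, groups=32))  # 18432 weights (grouped conv)
```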
{ "cite_N": [ "@cite_0", "@cite_15", "@cite_29", "@cite_24" ], "mid": [ "", "2097117768", "2549139847", "2183341477" ], "abstract": [ "", "We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.", "We present a simple, highly modularized network architecture for image classification. Our network is constructed by repeating a building block that aggregates a set of transformations with the same topology. Our simple design results in a homogeneous, multi-branch architecture that has only a few hyper-parameters to set. This strategy exposes a new dimension, which we call cardinality (the size of the set of transformations), as an essential factor in addition to the dimensions of depth and width. On the ImageNet-1K dataset, we empirically show that even under the restricted condition of maintaining complexity, increasing cardinality is able to improve classification accuracy. Moreover, increasing cardinality is more effective than going deeper or wider when we increase the capacity. Our models, named ResNeXt, are the foundations of our entry to the ILSVRC 2016 classification task in which we secured 2nd place. We further investigate ResNeXt on an ImageNet-5K set and the COCO detection set, also showing better results than its ResNet counterpart. The code and models are publicly available online.", "Convolutional networks are at the core of most state of-the-art computer vision solutions for a wide variety of tasks. Since 2014 very deep convolutional networks started to become mainstream, yielding substantial gains in various benchmarks. Although increased model size and computational cost tend to translate to immediate quality gains for most tasks (as long as enough labeled data is provided for training), computational efficiency and low parameter count are still enabling factors for various use cases such as mobile vision and big-data scenarios. Here we are exploring ways to scale up networks in ways that aim at utilizing the added computation as efficiently as possible by suitably factorized convolutions and aggressive regularization. We benchmark our methods on the ILSVRC 2012 classification challenge validation set demonstrate substantial gains over the state of the art: 21:2 top-1 and 5:6 top-5 error for single frame evaluation using a network with a computational cost of 5 billion multiply-adds per inference and with using less than 25 million parameters. With an ensemble of 4 models and multi-crop evaluation, we report 3:5 top-5 error and 17:3 top-1 error on the validation set and 3:6 top-5 error on the official test set." ] }
1812.07134
2904473683
This paper tackles high-dynamic-range (HDR) image reconstruction given only a single low-dynamic-range (LDR) image as input. While the existing methods focus on minimizing the mean-squared error (MSE) between the target and reconstructed images, we minimize a hybrid loss that consists of perceptual and adversarial losses in addition to an HDR-reconstruction loss. The reconstruction loss, unlike MSE, is more suitable for HDR since it puts more weight on both over- and under-exposed areas, making the reconstruction faithful to the input. The perceptual loss enables the networks to utilize knowledge about objects and image structure for recovering the intensity gradients of saturated and grossly quantized areas. The adversarial loss helps to select the most plausible appearance from multiple solutions. The hybrid loss that combines all three losses is calculated in the logarithmic space of image intensity so that the outputs retain a large dynamic range while the learning remains tractable. Comparative experiments conducted with other state-of-the-art methods demonstrated that our method produces a leap in image quality.
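As a rough illustration of the loss design sketched in this abstract, the snippet below shows what an HDR reconstruction loss computed in logarithmic intensity space, with extra weight on badly exposed input pixels, might look like; the weighting scheme and the constants are assumptions for illustration, not the paper's exact formulation.

```python
import torch

def log_space_reconstruction_loss(pred_hdr, target_hdr, ldr, eps=1e-6, tau=0.95):
    # Compare intensities in log space so large dynamic ranges stay tractable.
    log_pred = torch.log(pred_hdr + eps)
    log_tgt = torch.log(target_hdr + eps)
    # Pixels that were saturated or crushed in the LDR input carry the least
    # information, so they are up-weighted (assumed scheme, for illustration).
    badly_exposed = ((ldr > tau) | (ldr < 1.0 - tau)).float()
    weight = 1.0 + badly_exposed
    return (weight * (log_pred - log_tgt).abs()).mean()
```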
Conventionally, HDR reconstruction has been performed by non-learning-based brightness enhancement through filtering or light-source detection. For example, bilateral filters applied to @math - @math -range three-dimensional grids work as brightness enhancement functions @cite_32 @cite_14 . However, non-learning-based approaches cannot estimate physically accurate amounts of light due to the lack of knowledge about real HDR images; thus, the quality of the estimated HDR images is limited.
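The following is a loose, hedged sketch of the filter-based brightness-enhancement idea: an edge-preserving bilateral filter estimates local brightness, which then drives a smooth gain map. It is only inspired by the cited bilateral-grid methods; the constants and the quadratic gain shape are assumptions.

```python
import cv2
import numpy as np

def expand_dynamic_range(ldr_bgr, gain=4.0):
    img = ldr_bgr.astype(np.float32) / 255.0
    luma = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Edge-preserving smoothing of luminance (the "brightness" estimate).
    base = cv2.bilateralFilter(luma, d=9, sigmaColor=0.1, sigmaSpace=16)
    # Smooth gain map: the brightest regions are expanded the most.
    gain_map = 1.0 + (gain - 1.0) * base ** 2
    return img * gain_map[..., None]  # linear-domain pseudo-HDR image
```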
{ "cite_N": [ "@cite_14", "@cite_32" ], "mid": [ "2008675431", "2073095526" ], "abstract": [ "Reverse-tone-mapping operators (rTMOs) enhance low-dynamic-range images and videos for display on high-dynamic-range monitors. A common problem faced by previous rTMOs is the handling of under or overexposed content. Under such conditions, they may not be effective, and even cause loss and reversal of visible contrast. We present an rTMO based on cross-bilateral filtering that generates high-quality HDR images and videos for a wide range of exposures. Experiments performed using an objective image quality metric show that our approach is the only technique available that can gracefully enhance perceived details across a large range of image exposures.", "This paper presents an automatic technique for producing high-quality brightness-enhancement functions for real-time reverse tone mapping of images and videos. Our approach uses a bilateral filter to obtain smooth results while preserving sharp luminance discontinuities, and can be efficiently implemented on GPUs. We demonstrate the effectiveness of our approach by reverse tone mapping several images and videos. Experiments based on HDR visible difference predicator and on an image distortion metric indicate that the results produced by our method are less prone to visible artifacts than the ones obtained with the state-of-the-art technique for real-time automatic computation of brightness enhancement functions." ] }
1812.07134
2904473683
This paper tackles high-dynamic-range (HDR) image reconstruction given only a single low-dynamic-range (LDR) image as input. While the existing methods focus on minimizing the mean-squared error (MSE) between the target and reconstructed images, we minimize a hybrid loss that consists of perceptual and adversarial losses in addition to an HDR-reconstruction loss. The reconstruction loss, unlike MSE, is more suitable for HDR since it puts more weight on both over- and under-exposed areas, making the reconstruction faithful to the input. The perceptual loss enables the networks to utilize knowledge about objects and image structure for recovering the intensity gradients of saturated and grossly quantized areas. The adversarial loss helps to select the most plausible appearance from multiple solutions. The hybrid loss that combines all three losses is calculated in the logarithmic space of image intensity so that the outputs retain a large dynamic range while the learning remains tractable. Comparative experiments conducted with other state-of-the-art methods demonstrated that our method produces a leap in image quality.
An example of a multi-step method is Deep Reverse Tone Mapping (DrTMO) @cite_10 , which generates multiple images with different exposures using an encoder-decoder network @cite_25 @cite_38 . To train the network, input LDR images are simulated from an HDR image dataset using various camera curves @cite_8 . ChainHDRI @cite_35 and RecursiveHDRI @cite_15 are similar to DrTMO @cite_10 , the difference being that they recurrently generate higher- or lower-exposure images from the images generated in previous time steps. However, such recurrent methods require multiple forward computations to generate one HDR image; in contrast, ours generates HDR images in a single forward pass.
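For reference, the merging step that these bracketed-exposure pipelines end with can be sketched as a Debevec-style weighted average; the gamma inverse below is a stand-in for the camera curves the papers estimate or learn, and all constants are illustrative assumptions.

```python
import numpy as np

def merge_exposure_stack(ldr_stack, exposure_times, gamma=2.2, eps=1e-8):
    """Merge bracketed LDR images (values in [0, 1]) into one HDR radiance map."""
    hdr_num = np.zeros_like(ldr_stack[0], dtype=np.float64)
    hdr_den = np.zeros_like(hdr_num)
    for ldr, t in zip(ldr_stack, exposure_times):
        linear = np.clip(ldr, 0.0, 1.0) ** gamma   # undo an assumed display gamma
        weight = 1.0 - np.abs(2.0 * ldr - 1.0)     # hat weight: trust mid-tones most
        hdr_num += weight * linear / t
        hdr_den += weight
    return hdr_num / (hdr_den + eps)
```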
{ "cite_N": [ "@cite_38", "@cite_35", "@cite_8", "@cite_15", "@cite_10", "@cite_25" ], "mid": [ "2025768430", "2963300898", "2096099694", "2894939846", "2769930525", "2100495367" ], "abstract": [ "Previous work has shown that the difficulties in learning deep generative or discriminative models can be overcome by an initial unsupervised learning step that maps inputs to useful intermediate representations. We introduce and motivate a new training principle for unsupervised learning of a representation based on the idea of making the learned representations robust to partial corruption of the input pattern. This approach can be used to train autoencoders, and these denoising autoencoders can be stacked to initialize deep architectures. The algorithm can be motivated from a manifold learning and information theoretic perspective or from a generative model perspective. Comparative experiments clearly show the surprising advantage of corrupting the input of autoencoders on a pattern classification benchmark suite.", "Recently, high dynamic range (HDR) imaging has attracted much attention as a technology to reflect human visual characteristics owing to the development of the display and camera technology. This paper proposes a novel deep neural network model that reconstructs an HDR image from a single low dynamic range (LDR) image. The proposed model is based on a convolutional neural network composed of dilated convolutional layers and infers LDR images with various exposures and illumination from a single LDR image of the same scene. Then, the final HDR image can be formed by merging these inference results. It is relatively simple for the proposed method to find the mapping between the LDR and an HDR with a different bit depth because of the chaining structure inferring the relationship between the LDR images with brighter (or darker) exposures from a given LDR image. The method not only extends the range but also has the advantage of restoring the light information of the actual physical world. The proposed method is an end-to-end reconstruction process, and it has the advantage of being able to easily combine a network to extend an additional range. In the experimental results, the proposed method shows quantitative and qualitative improvement in performance, compared with the conventional algorithms.", "Many vision applications require precise measurement of scene radiance. The function relating scene radiance to image brightness is called the camera response. We analyze the properties that all camera responses share. This allows us to find the constraints that any response function must satisfy. These constraints determine the theoretical space of all possible camera responses. We have collected a diverse database of real-world camera response functions (DoRF). Using this database we show that real-world responses occupy a small part of the theoretical space of all possible responses. We combine the constraints from our theoretical space with the data from DoRF to create a low-parameter Empirical Model of Response (EMoR). This response model allows us to accurately interpolate the complete response function of a camera from a small number of measurements obtained using a standard chart. We also show that the model can be used to accurately estimate the camera response from images of an arbitrary scene taken using different exposures. 
The DoRF database and the EMoR model can be downloaded at http: www.cs.columbia.edu CAVE.", "High dynamic range images contain luminance information of the physical world and provide more realistic experience than conventional low dynamic range images. Because most images have a low dynamic range, recovering the lost dynamic range from a single low dynamic range image is still prevalent. We propose a novel method for restoring the lost dynamic range from a single low dynamic range image through a deep neural network. The proposed method is the first framework to create high dynamic range images based on the estimated multi-exposure stack using the conditional generative adversarial network structure. In this architecture, we train the network by setting an objective function that is a combination of L1 loss and generative adversarial network loss. In addition, this architecture has a simplified structure than the existing networks. In the experimental results, the proposed network generated a multi-exposure stack consisting of realistic images with varying exposure values while avoiding artifacts on public benchmarks, compared with the existing methods. In addition, both the multi-exposure stacks and high dynamic range images estimated by the proposed method are significantly similar to the ground truth than other state-of-the-art algorithms.", "Inferring a high dynamic range (HDR) image from a single low dynamic range (LDR) input is an ill-posed problem where we must compensate lost data caused by under- over-exposure and color quantization. To tackle this, we propose the first deep-learning-based approach for fully automatic inference using convolutional neural networks. Because a naive way of directly inferring a 32-bit HDR image from an 8-bit LDR image is intractable due to the difficulty of training, we take an indirect approach; the key idea of our method is to synthesize LDR images taken with different exposures (i.e., bracketed images) based on supervised learning, and then reconstruct an HDR image by merging them. By learning the relative changes of pixel values due to increased decreased exposures using 3D deconvolutional networks, our method can reproduce not only natural tones without introducing visible noise but also the colors of saturated pixels. We demonstrate the effectiveness of our method by comparing our results not only with those of conventional methods but also with ground-truth HDR images.", "High-dimensional data can be converted to low-dimensional codes by training a multilayer neural network with a small central layer to reconstruct high-dimensional input vectors. Gradient descent can be used for fine-tuning the weights in such “autoencoder” networks, but this works well only if the initial weights are close to a good solution. We describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data." ] }
1812.07134
2904473683
This paper tackles high-dynamic-range (HDR) image reconstruction given only a single low-dynamic-range (LDR) image as input. While the existing methods focus on minimizing the mean-squared error (MSE) between the target and reconstructed images, we minimize a hybrid loss that consists of perceptual and adversarial losses in addition to an HDR-reconstruction loss. The reconstruction loss, unlike MSE, is more suitable for HDR since it puts more weight on both over- and under-exposed areas, making the reconstruction faithful to the input. The perceptual loss enables the networks to utilize knowledge about objects and image structure for recovering the intensity gradients of saturated and grossly quantized areas. The adversarial loss helps to select the most plausible appearance from multiple solutions. The hybrid loss that combines all three losses is calculated in the logarithmic space of image intensity so that the outputs retain a large dynamic range while the learning remains tractable. Comparative experiments conducted with other state-of-the-art methods demonstrated that our method produces a leap in image quality.
With the growing popularity of end-to-end learning, single-step networks that directly estimate the desired HDR image may be preferable to multi-step methods. HDR-CNN @cite_31 and Deep Reciprocating HDR @cite_29 share the same encoder-decoder structure that directly generates an HDR image from an LDR image. While the architecture itself is similar to U-Net for segmentation @cite_34 , these networks are trained to recover from over-/under-exposure of moderate extent that is artificially added to the training LDR images. ExpandNet @cite_6 has a three-branch architecture designed for single-step HDR image generation, with branches for global, semi-local, and local feature extraction. In contrast, we show that a simple encoder-decoder architecture performs well with our augmented loss functions.
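A minimal sketch of the encoder-decoder shape referred to above is given below; it is not the HDR-CNN or DRHT architecture (those use deeper networks and skip connections), just the smallest runnable illustration of the down/up-sampling structure.

```python
import torch.nn as nn

class TinyHDRNet(nn.Module):
    """Toy encoder-decoder: downsample twice, then upsample back to a
    3-channel prediction (interpreted here as a log-HDR image)."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1),
        )

    def forward(self, ldr):
        return self.decoder(self.encoder(ldr))
```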
{ "cite_N": [ "@cite_31", "@cite_29", "@cite_6", "@cite_34" ], "mid": [ "2766497195", "2797721596", "2964076515", "2952232639" ], "abstract": [ "Camera sensors can only capture a limited range of luminance simultaneously, and in order to create high dynamic range (HDR) images a set of different exposures are typically combined. In this paper we address the problem of predicting information that have been lost in saturated image areas, in order to enable HDR reconstruction from a single exposure. We show that this problem is well-suited for deep learning algorithms, and propose a deep convolutional neural network (CNN) that is specifically designed taking into account the challenges in predicting HDR values. To train the CNN we gather a large dataset of HDR images, which we augment by simulating sensor saturation for a range of cameras. To further boost robustness, we pre-train the CNN on a simulated HDR dataset created from a subset of the MIT Places database. We demonstrate that our approach can reconstruct high-resolution visually convincing HDR results in a wide range of situations, and that it generalizes well to reconstruction of images captured with arbitrary and low-end cameras that use unknown camera response functions and post-processing. Furthermore, we compare to existing methods for HDR expansion, and show high quality results also for image based lighting. Finally, we evaluate the results in a subjective experiment performed on an HDR display. This shows that the reconstructed HDR images are visually convincing, with large improvements as compared to existing methods.", "Image correction aims to adjust an input image into a visually pleasing one. Existing approaches are proposed mainly from the perspective of image pixel manipulation. They are not effective to recover the details in the under over exposed regions. In this paper, we revisit the image formation procedure and notice that the missing details in these regions exist in the corresponding high dynamic range (HDR) data. These details are well perceived by the human eyes but diminished in the low dynamic range (LDR) domain because of the tone mapping process. Therefore, we formulate the image correction task as an HDR transformation process and propose a novel approach called Deep Reciprocating HDR Transformation (DRHT). Given an input LDR image, we first reconstruct the missing details in the HDR domain. We then perform tone mapping on the predicted HDR data to generate the output LDR image with the recovered details. To this end, we propose a united framework consisting of two CNNs for HDR reconstruction and tone mapping. They are integrated end-to-end for joint training and prediction. Experiments on the standard benchmarks demonstrate that the proposed method performs favorably against state-of-the-art image correction methods.", "High dynamic range (HDR) imaging provides the capability of handling real world lighting as opposed to the traditional low dynamic range (LDR) which struggles to accurately represent images with higher dynamic range. However, most imaging content is still available only in LDR. This paper presents a method for generating HDR content from LDR content based on deep Convolutional Neural Networks (CNNs) termed ExpandNet. ExpandNet accepts LDR images as input and generates images with an expanded range in an end-to-end fashion. The model attempts to reconstruct missing information that was lost from the original signal due to quantization, clipping, tone mapping or gamma correction. 
The added information is reconstructed from learned features, as the network is trained in a supervised fashion using a dataset of HDR images. The approach is fully automatic and data driven; it does not require any heuristics or human expertise. ExpandNet uses a multiscale architecture which avoids the use of upsampling layers to improve image quality. The method performs well compared to expansion inverse tone mapping operators quantitatively on multiple metrics, even for badly exposed inputs.", "There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at this http URL ." ] }
1812.07134
2904473683
This paper tackles high-dynamic-range (HDR) image reconstruction given only a single low-dynamic-range (LDR) image as input. While the existing methods focus on minimizing the mean-squared error (MSE) between the target and reconstructed images, we minimize a hybrid loss that consists of perceptual and adversarial losses in addition to an HDR-reconstruction loss. The reconstruction loss, unlike MSE, is more suitable for HDR since it puts more weight on both over- and under-exposed areas, making the reconstruction faithful to the input. The perceptual loss enables the networks to utilize knowledge about objects and image structure for recovering the intensity gradients of saturated and grossly quantized areas. The adversarial loss helps to select the most plausible appearance from multiple solutions. The hybrid loss that combines all three losses is calculated in the logarithmic space of image intensity so that the outputs retain a large dynamic range while the learning remains tractable. Comparative experiments conducted with other state-of-the-art methods demonstrated that our method produces a leap in image quality.
GANs have already been used for HDR image generation by Lee et al. @cite_15 and Ning et al. @cite_12 . Introducing a GAN further improves restoration quality over simple encoder-decoder networks. However, we found that a GAN combined with a reconstruction error alone still generates blurry or unnatural artifacts. In this paper, by further introducing a perceptual loss and a reconstruction loss optimized for HDR, the image quality can be improved.
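A hedged sketch of such a hybrid generator objective is shown below: an L1 reconstruction term in log space, a VGG-feature perceptual term, and a non-saturating adversarial term. The loss weights, the VGG layer cut, and feeding log-HDR tensors to VGG without ImageNet normalization are all simplifying assumptions, not the paper's exact recipe.

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg16

# Frozen VGG16 features (up to relu3_3) used as a perceptual feature extractor.
_vgg = vgg16(weights="IMAGENET1K_V1").features[:16].eval()
for p in _vgg.parameters():
    p.requires_grad_(False)

def hybrid_loss(pred_log_hdr, target_log_hdr, disc_score,
                w_rec=1.0, w_per=0.1, w_adv=0.01):
    rec = F.l1_loss(pred_log_hdr, target_log_hdr)              # reconstruction
    per = F.l1_loss(_vgg(pred_log_hdr), _vgg(target_log_hdr))  # perceptual
    adv = F.binary_cross_entropy_with_logits(                  # adversarial:
        disc_score, torch.ones_like(disc_score))               # fool the critic
    return w_rec * rec + w_per * per + w_adv * adv
```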
{ "cite_N": [ "@cite_15", "@cite_12" ], "mid": [ "2894939846", "2799090635" ], "abstract": [ "High dynamic range images contain luminance information of the physical world and provide more realistic experience than conventional low dynamic range images. Because most images have a low dynamic range, recovering the lost dynamic range from a single low dynamic range image is still prevalent. We propose a novel method for restoring the lost dynamic range from a single low dynamic range image through a deep neural network. The proposed method is the first framework to create high dynamic range images based on the estimated multi-exposure stack using the conditional generative adversarial network structure. In this architecture, we train the network by setting an objective function that is a combination of L1 loss and generative adversarial network loss. In addition, this architecture has a simplified structure than the existing networks. In the experimental results, the proposed network generated a multi-exposure stack consisting of realistic images with varying exposure values while avoiding artifacts on public benchmarks, compared with the existing methods. In addition, both the multi-exposure stacks and high dynamic range images estimated by the proposed method are significantly similar to the ground truth than other state-of-the-art algorithms.", "Transferring a low-dynamic-range (LDR) image to a high-dynamic-range (HDR) image, which is the so-called inverse tone mapping (iTM), is an important imaging technique to improve visual effects of imaging devices. In this paper, we propose a novel deep learning-based iTM method, which learns an inverse tone mapping network with a generative adversarial regularizer. In the framework of alternating optimization, we learn a U-Net-based HDR image generator to transfer input LDR images to HDR ones, and a simple CNN-based discriminator to classify the real HDR images and the generated ones. Specifically, when learning the generator we consider the content-related loss and the generative adversarial regularizer jointly to improve the stability and the robustness of the generated HDR images. Using the learned generator as the proposed inverse tone mapping network, we achieve superior iTM results to the state-of-the-art methods consistently." ] }
1812.07134
2904473683
This paper tackles high-dynamic-range (HDR) image reconstruction given only a single low-dynamic-range (LDR) image as input. While the existing methods focus on minimizing the mean-squared error (MSE) between the target and reconstructed images, we minimize a hybrid loss that consists of perceptual and adversarial losses in addition to an HDR-reconstruction loss. The reconstruction loss, unlike MSE, is more suitable for HDR since it puts more weight on both over- and under-exposed areas, making the reconstruction faithful to the input. The perceptual loss enables the networks to utilize knowledge about objects and image structure for recovering the intensity gradients of saturated and grossly quantized areas. The adversarial loss helps to select the most plausible appearance from multiple solutions. The hybrid loss that combines all three losses is calculated in the logarithmic space of image intensity so that the outputs retain a large dynamic range while the learning remains tractable. Comparative experiments conducted with other state-of-the-art methods demonstrated that our method produces a leap in image quality.
Apart from HDR reconstruction, there is a wider variety of deep-learning methods for image processing within LDR images, which are still useful as references. For example, convolutional GANs have performed well in super-resolution @cite_30 , denoising @cite_28 , and inpainting @cite_22 . Other than GANs, there are promising approaches such as multiscale processing @cite_20 @cite_4 , perceptual losses @cite_16 , attention @cite_39 , and reinforcement learning @cite_1 @cite_9 . While their insights are also useful for our task, such methods for LDR images are not directly applicable to HDR images.
{ "cite_N": [ "@cite_30", "@cite_4", "@cite_22", "@cite_28", "@cite_9", "@cite_1", "@cite_39", "@cite_16", "@cite_20" ], "mid": [ "2523714292", "", "2342877626", "2798278116", "2949086814", "2797519004", "2768189935", "2950689937", "2557414982" ], "abstract": [ "Despite the breakthroughs in accuracy and speed of single image super-resolution using faster and deeper convolutional neural networks, one central problem remains largely unsolved: how do we recover the finer texture details when we super-resolve at large upscaling factors? The behavior of optimization-based super-resolution methods is principally driven by the choice of the objective function. Recent work has largely focused on minimizing the mean squared reconstruction error. The resulting estimates have high peak signal-to-noise ratios, but they are often lacking high-frequency details and are perceptually unsatisfying in the sense that they fail to match the fidelity expected at the higher resolution. In this paper, we present SRGAN, a generative adversarial network (GAN) for image super-resolution (SR). To our knowledge, it is the first framework capable of inferring photo-realistic natural images for 4x upscaling factors. To achieve this, we propose a perceptual loss function which consists of an adversarial loss and a content loss. The adversarial loss pushes our solution to the natural image manifold using a discriminator network that is trained to differentiate between the super-resolved images and original photo-realistic images. In addition, we use a content loss motivated by perceptual similarity instead of similarity in pixel space. Our deep residual network is able to recover photo-realistic textures from heavily downsampled images on public benchmarks. An extensive mean-opinion-score (MOS) test shows hugely significant gains in perceptual quality using SRGAN. The MOS scores obtained with SRGAN are closer to those of the original high-resolution images than to those obtained with any state-of-the-art method.", "", "We present an unsupervised visual feature learning algorithm driven by context-based pixel prediction. By analogy with auto-encoders, we propose Context Encoders -- a convolutional neural network trained to generate the contents of an arbitrary image region conditioned on its surroundings. In order to succeed at this task, context encoders need to both understand the content of the entire image, as well as produce a plausible hypothesis for the missing part(s). When training context encoders, we have experimented with both a standard pixel-wise reconstruction loss, as well as a reconstruction plus an adversarial loss. The latter produces much sharper results because it can better handle multiple modes in the output. We found that a context encoder learns a representation that captures not just appearance but also the semantics of visual structures. We quantitatively demonstrate the effectiveness of our learned features for CNN pre-training on classification, detection, and segmentation tasks. Furthermore, context encoders can be used for semantic inpainting tasks, either stand-alone or as initialization for non-parametric methods.", "In this paper, we consider a typical image blind denoising problem, which is to remove unknown noise from noisy images. As we all know, discriminative learning based methods, such as DnCNN, can achieve state-of-the-art denoising results, but they are not applicable to this problem due to the lack of paired training data. 
To tackle the barrier, we propose a novel two-step framework. First, a Generative Adversarial Network (GAN) is trained to estimate the noise distribution over the input noisy images and to generate noise samples. Second, the noise patches sampled from the first step are utilized to construct a paired training dataset, which is used, in turn, to train a deep Convolutional Neural Network (CNN) for denoising. Extensive experiments have been done to demonstrate the superiority of our approach in image blind denoising.", "This paper tackles a new problem setting: reinforcement learning with pixel-wise rewards (pixelRL) for image processing. After the introduction of the deep Q-network, deep RL has been achieving great success. However, the applications of deep RL for image processing are still limited. Therefore, we extend deep RL to pixelRL for various image processing applications. In pixelRL, each pixel has an agent, and the agent changes the pixel value by taking an action. We also propose an effective learning method for pixelRL that significantly improves the performance by considering not only the future states of the own pixel but also those of the neighbor pixels. The proposed method can be applied to some image processing tasks that require pixel-wise manipulations, where deep RL has never been applied. We apply the proposed method to three image processing tasks: image denoising, image restoration, and local color enhancement. Our experimental results demonstrate that the proposed method achieves comparable or better performance, compared with the state-of-the-art methods based on supervised learning.", "We investigate a novel approach for image restoration by reinforcement learning. Unlike existing studies that mostly train a single large network for a specialized task, we prepare a toolbox consisting of small-scale convolutional networks of different complexities and specialized in different tasks. Our method, RL-Restore, then learns a policy to select appropriate tools from the toolbox to progressively restore the quality of a corrupted image. We formulate a step-wise reward function proportional to how well the image is restored at each step to learn the action policy. We also devise a joint learning scheme to train the agent and tools for better performance in handling uncertainty. In comparison to conventional human-designed networks, RL-Restore is capable of restoring images corrupted with complex and unknown distortions in a more parameter-efficient manner using the dynamically formed toolchain.", "Raindrops adhered to a glass window or camera lens can severely hamper the visibility of a background scene and degrade an image considerably. In this paper, we address the problem by visually removing raindrops, and thus transforming a raindrop degraded image into a clean one. The problem is intractable, since first the regions occluded by raindrops are not given. Second, the information about the background scene of the occluded regions is completely lost for most part. To resolve the problem, we apply an attentive generative network using adversarial training. Our main idea is to inject visual attention into both the generative and discriminative networks. During the training, our visual attention learns about raindrop regions and their surroundings. 
Hence, by injecting this information, the generative network will pay more attention to the raindrop regions and the surrounding structures, and the discriminative network will be able to assess the local consistency of the restored regions. This injection of visual attention to both generative and discriminative networks is the main contribution of this paper. Our experiments show the effectiveness of our approach, which outperforms the state of the art methods quantitatively and qualitatively.", "We consider image transformation problems, where an input image is transformed into an output image. Recent methods for such problems typically train feed-forward convolutional neural networks using a loss between the output and ground-truth images. Parallel work has shown that high-quality images can be generated by defining and optimizing loss functions based on high-level features extracted from pretrained networks. We combine the benefits of both approaches, and propose the use of perceptual loss functions for training feed-forward networks for image transformation tasks. We show results on image style transfer, where a feed-forward network is trained to solve the optimization problem proposed by in real-time. Compared to the optimization-based method, our network gives similar qualitative results but is three orders of magnitude faster. We also experiment with single-image super-resolution, where replacing a per-pixel loss with a perceptual loss gives visually pleasing results.", "Recent advances in deep learning have shown exciting promise in filling large holes in natural images with semantically plausible and context aware details, impacting fundamental image manipulation tasks such as object removal. While these learning-based methods are significantly more effective in capturing high-level features than prior techniques, they can only handle very low-resolution inputs due to memory limitations and difficulty in training. Even for slightly larger images, the inpainted regions would appear blurry and unpleasant boundaries become visible. We propose a multi-scale neural patch synthesis approach based on joint optimization of image content and texture constraints, which not only preserves contextual structures but also produces high-frequency details by matching and adapting patches with the most similar mid-layer feature correlations of a deep classification network. We evaluate our method on the ImageNet and Paris Streetview datasets and achieved state-of-the-art inpainting accuracy. We show our approach produces sharper and more coherent results than prior methods, especially for high-resolution images." ] }
1812.07145
2904366862
Scene text recognition has received increased attention in the research community. Text in the wild often possesses irregular arrangements, typically including perspective text, curved text, and oriented text. Most existing methods struggle with irregular text, especially severely distorted text. In this paper, we propose a novel Recurrent Calibration Network (RCN) for irregular scene text recognition. The RCN progressively calibrates the irregular text to boost the recognition performance. By decomposing the calibration process into multiple steps, the irregular text can be calibrated to a normal one step by step. Besides, in order to avoid the accumulation of lost information caused by inaccurate transformations, we further design a fiducial-point refinement structure to keep the integrity of the text during the recurrent process. Instead of the calibrated images, the coordinates of the fiducial points are tracked and refined, which implicitly models the transformation information. Based on the refined fiducial points, we estimate the transformation parameters and sample from the original image at each step. In this way, the original character information is preserved until the final transformation. Such designs lead to optimal calibration results that boost the performance of the succeeding recognition. Extensive experiments on challenging datasets demonstrate the superiority of our method, especially on irregular benchmarks.
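The key trick in this abstract, always resampling from the original image while only the transformation parameters are refined, can be sketched with PyTorch's grid sampling. A plain affine transform stands in for the paper's fiducial-point-based (thin-plate-spline-like) transform, and `predict_delta` is a hypothetical network that regresses a parameter update.

```python
import torch
import torch.nn.functional as F

def recurrent_calibration(original, predict_delta, steps=3):
    # Start from the identity affine transform for each image in the batch.
    theta = torch.eye(2, 3).unsqueeze(0).repeat(original.size(0), 1, 1)
    calibrated = original
    for _ in range(steps):
        theta = theta + predict_delta(calibrated)  # refine parameters only
        grid = F.affine_grid(theta, list(original.shape), align_corners=False)
        # Always sample from the ORIGINAL image, so no detail is lost
        # to repeated resampling across steps.
        calibrated = F.grid_sample(original, grid, align_corners=False)
    return calibrated
```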
Scene text recognition has been widely researched, and numerous methods have been proposed in recent years. Traditional methods recognized scene text in a character-level manner: they first performed detection to generate multiple candidate character locations, then applied a character classifier for recognition. Wang @cite_2 detected each character with a sliding window and recognized it with a character classifier trained on HOG descriptors. Bissacco @cite_18 designed a fully connected network to extract character feature representations, then used a language model to recognize characters. However, the performance of these methods is limited by inaccurate character detectors. To avoid this problem, some methods directly learned the mapping between entire word images and target strings. For example, Jaderberg @cite_0 assigned a class label to each word in a pre-defined lexicon and performed a 90k-class classification with a CNN. Rodriguez-Serrano @cite_4 formulated scene text recognition as a retrieval problem, embedding word labels and word images into a common Euclidean space and finding the closest word label in this space.
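The character-level pipeline described above can be sketched as follows; `clf` is a hypothetical pre-trained scikit-learn character classifier with `predict_proba` (e.g. an SVC trained on HOG features), and the window size, stride, and threshold are illustrative assumptions.

```python
import numpy as np
from skimage.feature import hog

def sliding_window_chars(gray, clf, win=32, stride=8, thresh=0.9):
    """Slide a window over a grayscale image, describe each patch with HOG,
    and keep confident character detections as (x, y, class, score)."""
    detections = []
    h, w = gray.shape
    for y in range(0, h - win + 1, stride):
        for x in range(0, w - win + 1, stride):
            patch = gray[y:y + win, x:x + win]
            feat = hog(patch, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
            proba = clf.predict_proba(feat[None, :])[0]
            if proba.max() > thresh:
                detections.append((x, y, int(proba.argmax()), float(proba.max())))
    return detections
```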
{ "cite_N": [ "@cite_0", "@cite_18", "@cite_4", "@cite_2" ], "mid": [ "1922126009", "2122221966", "1990550880", "1998042868" ], "abstract": [ "In this work we present an end-to-end system for text spotting--localising and recognising text in natural scene images--and text based image retrieval. This system is based on a region proposal mechanism for detection and deep convolutional neural networks for recognition. Our pipeline uses a novel combination of complementary proposal generation techniques to ensure high recall, and a fast subsequent filtering stage for improving precision. For the recognition and ranking of proposals, we train very large convolutional neural networks to perform word recognition on the whole proposal region at the same time, departing from the character classifier based systems of the past. These networks are trained solely on data produced by a synthetic text generation engine, requiring no human labelled data. Analysing the stages of our pipeline, we show state-of-the-art performance throughout. We perform rigorous experiments across a number of standard end-to-end text spotting benchmarks and text-based image retrieval datasets, showing a large improvement over all previous methods. Finally, we demonstrate a real-world application of our text spotting system to allow thousands of hours of news footage to be instantly searchable via a text query.", "We describe Photo OCR, a system for text extraction from images. Our particular focus is reliable text extraction from smartphone imagery, with the goal of text recognition as a user input modality similar to speech recognition. Commercially available OCR performs poorly on this task. Recent progress in machine learning has substantially improved isolated character classification, we build on this progress by demonstrating a complete OCR system using these techniques. We also incorporate modern data center-scale distributed language modelling. Our approach is capable of recognizing text in a variety of challenging imaging conditions where traditional OCR systems fail, notably in the presence of substantial blur, low resolution, low contrast, high image noise and other distortions. It also operates with low latency, mean processing time is 600 ms per image. We evaluate our system on public benchmark datasets for text extraction and outperform all previously reported results, more than halving the error rate on multiple benchmarks. The system is currently in use in many applications at Google, and is available as a user input modality in Google Translate for Android.", "The standard approach to recognizing text in images consists in first classifying local image regions into candidate characters and then combining them with high-level word models such as conditional random fields. This paper explores a new paradigm that departs from this bottom-up view. We propose to embed word labels and word images into a common Euclidean space. Given a word image to be recognized, the text recognition problem is cast as one of retrieval: find the closest word label in this space. This common space is learned using the Structured SVM framework by enforcing matching label-image pairs to be closer than non-matching pairs. 
This method presents several advantages: it does not require ad-hoc or costly pre- post-processing operations, it can build on top of any state-of-the-art image descriptor (Fisher vectors in our case), it allows for the recognition of never-seen-before words (zero-shot recognition) and the recognition process is simple and efficient, as it amounts to a nearest neighbor search. Experiments are performed on challenging datasets of license plates and scene text. The main conclusion of the paper is that with such a frugal approach it is possible to obtain results which are competitive with standard bottom-up approaches, thus establishing label embedding as an interesting and simple to compute baseline for text recognition.", "This paper focuses on the problem of word detection and recognition in natural images. The problem is significantly more challenging than reading text in scanned documents, and has only recently gained attention from the computer vision community. Sub-components of the problem, such as text detection and cropped image word recognition, have been studied in isolation [7, 4, 20]. However, what is unclear is how these recent approaches contribute to solving the end-to-end problem of word recognition. We fill this gap by constructing and evaluating two systems. The first, representing the de facto state-of-the-art, is a two stage pipeline consisting of text detection followed by a leading OCR engine. The second is a system rooted in generic object recognition, an extension of our previous work in [20]. We show that the latter approach achieves superior performance. While scene text recognition has generally been treated with highly domain-specific methods, our results demonstrate the suitability of applying generic computer vision methods. Adopting this approach opens the door for real world scene text recognition to benefit from the rapid advances that have been taking place in object recognition." ] }
1812.07145
2904366862
Scene text recognition has received increased attention in the research community. Text in the wild often possesses irregular arrangements, typically including perspective text, curved text, and oriented text. Most existing methods struggle with irregular text, especially severely distorted text. In this paper, we propose a novel Recurrent Calibration Network (RCN) for irregular scene text recognition. The RCN progressively calibrates the irregular text to boost the recognition performance. By decomposing the calibration process into multiple steps, the irregular text can be calibrated to a normal one step by step. Besides, in order to avoid the accumulation of lost information caused by inaccurate transformations, we further design a fiducial-point refinement structure to keep the integrity of the text during the recurrent process. Instead of the calibrated images, the coordinates of the fiducial points are tracked and refined, which implicitly models the transformation information. Based on the refined fiducial points, we estimate the transformation parameters and sample from the original image at each step. In this way, the original character information is preserved until the final transformation. Such designs lead to optimal calibration results that boost the performance of the succeeding recognition. Extensive experiments on challenging datasets demonstrate the superiority of our method, especially on irregular benchmarks.
With the successful application of recurrent neural networks (RNNs) to sequence recognition, some researchers @cite_1 @cite_5 @cite_6 @cite_21 developed sequence-based methods that combine convolutional neural networks (CNNs) and RNNs to encode the feature representations of word images. Shi @cite_1 and He @cite_5 both used the Connectionist Temporal Classification (CTC) @cite_14 loss to calculate the conditional probabilities between the outputs of the RNN and the target sequences. After that, Shi @cite_6 and Li @cite_21 introduced an attention mechanism to adaptively weight the features and select the most relevant feature representations in an RNN-based decoder. In order to eliminate the attention drift problem, Cheng @cite_8 employed a focusing attention mechanism to automatically adjust the attention weights. Bai @cite_28 proposed the edit probability to estimate the probability of generating a string while considering possible occurrences of missing or superfluous characters. Although these approaches have shown promising results, they cannot effectively handle irregular text. The main reason is that word images are encoded into 1D feature sequences, but irregular text is not horizontally arranged.
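For reference, CTC training as used by these CRNN-style recognizers is a one-liner in PyTorch; the sequence length, alphabet size, and batch shape below are illustrative assumptions.

```python
import torch
import torch.nn as nn

T, N, C = 26, 4, 37   # time steps, batch size, classes (36 characters + blank=0)
log_probs = torch.randn(T, N, C, requires_grad=True).log_softmax(2)  # RNN output
targets = torch.randint(1, C, (N, 8))               # ground-truth label sequences
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), 8, dtype=torch.long)

ctc = nn.CTCLoss(blank=0)   # marginalizes over all alignments of labels to frames
loss = ctc(log_probs, targets, input_lengths, target_lengths)
loss.backward()
```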
{ "cite_N": [ "@cite_14", "@cite_8", "@cite_28", "@cite_21", "@cite_1", "@cite_6", "@cite_5" ], "mid": [ "2127141656", "2963054155", "2798484463", "2294053032", "", "", "1924985727" ], "abstract": [ "Many real-world sequence learning tasks require the prediction of sequences of labels from noisy, unsegmented input data. In speech recognition, for example, an acoustic signal is transcribed into words or sub-word units. Recurrent neural networks (RNNs) are powerful sequence learners that would seem well suited to such tasks. However, because they require pre-segmented training data, and post-processing to transform their outputs into label sequences, their applicability has so far been limited. This paper presents a novel method for training RNNs to label unsegmented sequences directly, thereby solving both problems. An experiment on the TIMIT speech corpus demonstrates its advantages over both a baseline HMM and a hybrid HMM-RNN.", "Scene text recognition has been a hot research topic in computer vision due to its various applications. The state of the art is the attention-based encoder-decoder framework that learns the mapping between input images and output sequences in a purely data-driven way. However, we observe that existing attention-based methods perform poorly on complicated and or low-quality images. One major reason is that existing methods cannot get accurate alignments between feature areas and targets for such images. We call this phenomenon “attention drift”. To tackle this problem, in this paper we propose the FAN (the abbreviation of Focusing Attention Network) method that employs a focusing attention mechanism to automatically draw back the drifted attention. FAN consists of two major components: an attention network (AN) that is responsible for recognizing character targets as in the existing methods, and a focusing network (FN) that is responsible for adjusting attention by evaluating whether AN pays attention properly on the target areas in the images. Furthermore, different from the existing methods, we adopt a ResNet-based network to enrich deep representations of scene text images. Extensive experiments on various benchmarks, including the IIIT5k, SVT and ICDAR datasets, show that the FAN method substantially outperforms the existing methods.", "We consider the scene text recognition problem under the attention-based encoder-decoder framework, which is the state of the art. The existing methods usually employ a frame-wise maximal likelihood loss to optimize the models. When we train the model, the misalignment between the ground truth strings and the attention's output sequences of probability distribution, which is caused by missing or superfluous characters, will confuse and mislead the training process, and consequently make the training costly and degrade the recognition accuracy. To handle this problem, we propose a novel method called edit probability (EP) for scene text recognition. EP tries to effectively estimate the probability of generating a string from the output sequence of probability distribution conditioned on the input image, while considering the possible occurrences of missing superfluous characters. The advantage lies in that the training process can focus on the missing, superfluous and unrecognized characters, and thus the impact of the misalignment problem can be alleviated or even overcome. We conduct extensive experiments on standard benchmarks, including the IIIT-5K, Street View Text and ICDAR datasets. 
Experimental results show that the EP can substantially boost scene text recognition performance.", "We present recursive recurrent neural networks with attention modeling (R2AM) for lexicon-free optical character recognition in natural scene images. The primary advantages of the proposed method are: (1) use of recursive convolutional neural networks (CNNs), which allow for parametrically efficient and effective image feature extraction, (2) an implicitly learned character-level language model, embodied in a recurrent neural network which avoids the need to use N-grams, and (3) the use of a soft-attention mechanism, allowing the model to selectively exploit image features in a coordinated way, and allowing for end-to-end training within a standard backpropagation framework. We validate our method with state-of-the-art performance on challenging benchmark datasets: Street View Text, IIIT5k, ICDAR and Synth90k.", "", "", "We develop a Deep-Text Recurrent Network (DTRN) that regards scene text reading as a sequence labelling problem. We leverage recent advances of deep convolutional neural networks to generate an ordered high-level sequence from a whole word image, avoiding the difficult character segmentation problem. Then a deep recurrent model, building on long short-term memory (LSTM), is developed to robustly recognize the generated CNN sequences, departing from most existing approaches recognising each character independently. Our model has a number of appealing properties in comparison to existing scene text recognition methods: (i) It can recognise highly ambiguous words by leveraging meaningful context information, allowing it to work reliably without either pre- or post-processing; (ii) the deep CNN feature is robust to various image distortions; (iii) it retains the explicit order information in word image, which is essential to discriminate word strings; (iv) the model does not depend on pre-defined dictionary, and it can process unknown words and arbitrary strings. Codes for the DTRN will be available." ] }
1812.07260
2905125557
Interactive image segmentation algorithms rely on the user to provide annotations as guidance. When interactive segmentation is performed on a small touchscreen device, the requirement of providing precise annotations can be cumbersome for the user. We design an efficient seed proposal method that actively proposes annotation seeds for the user to label. The user only needs to check which of the query seeds are inside the region of interest (ROI). We enforce sparsity and diversity criteria on the selection of the query seeds. At each round of interaction the user is presented with only a small number of informative query seeds that are far apart from each other. As a result, we are able to derive a user-friendly interaction mechanism for annotation on small touchscreen devices. The user merely has to swipe through the ROI-relevant query seeds, which should be easy since those gestures are commonly used on a touchscreen. The performance of our algorithm is evaluated on six publicly available datasets. The evaluation results show that our algorithm achieves high segmentation accuracy, with short response times and less user feedback.
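The sparsity-and-diversity selection described in this abstract can be illustrated with greedy farthest-point sampling over candidate seed coordinates; this is a stand-in for the paper's actual criteria, shown only to make the "few and far apart" idea concrete.

```python
import numpy as np

def propose_diverse_seeds(candidates, k):
    """Greedily pick k seeds that are pairwise far apart (farthest-point sampling)."""
    candidates = np.asarray(candidates, dtype=np.float64)  # (n, 2) pixel coords
    chosen = [0]                                           # arbitrary starting seed
    dists = np.linalg.norm(candidates - candidates[0], axis=1)
    for _ in range(1, k):
        nxt = int(dists.argmax())                          # farthest from chosen set
        chosen.append(nxt)
        dists = np.minimum(dists,
                           np.linalg.norm(candidates - candidates[nxt], axis=1))
    return candidates[chosen]
```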
Many well-known interactive image segmentation algorithms fall into this category, e.g., @cite_9 @cite_31 @cite_40 @cite_36 @cite_24 @cite_18 @cite_1 @cite_19 @cite_26 @cite_20 @cite_34 @cite_39 @cite_45 @cite_22 @cite_43 @cite_11 , in which the user directly specifies the location of each label via seeds or scribbles @cite_9 @cite_40 @cite_36 @cite_18 @cite_1 @cite_24 @cite_45 @cite_43 @cite_11 , contours @cite_19 @cite_26 @cite_34 @cite_39 @cite_22 , or bounding boxes @cite_31 @cite_20 @cite_22 . These algorithms use graph cuts, random walks, level sets, geodesic distance, or deep networks to segment the images according to the user annotations.
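As a concrete example of the scribble-driven family, scikit-image ships the random-walker algorithm mentioned above; the toy image and scribble positions below are assumptions chosen only to make the snippet runnable.

```python
import numpy as np
from skimage.segmentation import random_walker

rng = np.random.default_rng(0)
img = np.full((80, 80), 0.2)
img[20:60, 20:60] = 0.8                       # bright square plays the object
img += 0.05 * rng.standard_normal(img.shape)  # add noise

labels = np.zeros(img.shape, dtype=np.int32)  # 0 = unlabeled pixels
labels[40, 40] = 1                            # a foreground scribble
labels[5, 5] = 2                              # a background scribble

seg = random_walker(img, labels, beta=130)    # per-pixel label: 1 or 2
```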
{ "cite_N": [ "@cite_18", "@cite_26", "@cite_22", "@cite_36", "@cite_9", "@cite_1", "@cite_39", "@cite_24", "@cite_19", "@cite_40", "@cite_45", "@cite_43", "@cite_31", "@cite_34", "@cite_20", "@cite_11" ], "mid": [ "2125637308", "2083277843", "", "", "2169551590", "2168555635", "2566922557", "2169374938", "2104095591", "1693210201", "2300469113", "2776163999", "2137592810", "", "2124351162", "" ], "abstract": [ "A novel method is proposed for performing multilabel, interactive image segmentation. Given a small number of pixels with user-defined (or predefined) labels, one can analytically and quickly determine the probability that a random walker starting at each unlabeled pixel will first reach one of the prelabeled pixels. By assigning each pixel to the label for which the greatest probability is calculated, a high-quality image segmentation may be obtained. Theoretical properties of this algorithm are developed along with the corresponding connections to discrete potential theory and electrical circuits. This algorithm is formulated in discrete space (i.e., on a graph) using combinatorial analogues of standard operators and principles from continuous potential theory, allowing it to be applied in arbitrary dimension on arbitrary graphs", "We present a new, interactive tool called Intelligent Scissors which we use for image segmentation and composition. Fully automated segmentation is an unsolved problem, while manual tracing is inaccurate and laboriously unacceptable. However, Intelligent Scissors allow objects within digital images to be extracted quickly and accurately using simple gesture motions with a mouse. When the gestured mouse position comes in proximity to an object edge, a live-wire boundary “snaps” to, and wraps around the object of interest. Live-wire boundary detection formulates discrete dynamic programming (DP) as a two-dimensional graph searching problem. DP provides mathematically optimal boundaries while greatly reducing sensitivity to local noise or other intervening structures. Robustness is further enhanced with on-the-fly training which causes the boundary to adhere to the specific type of edge currently being followed, rather than simply the strongest edge in the neighborhood. Boundary cooling automatically freezes unchanging segments and automates input of additional seed points. Cooling also allows the user to be much more free with the gesture path, thereby increasing the efficiency and finesse with which boundaries can be extracted. Extracted objects can be scaled, rotated, and composited using live-wire masks and spatial frequency equivalencing. Frequency equivalencing is performed by applying a Butterworth filter which matches the lowest frequency spectra to all other image components. Intelligent Scissors allow creation of convincing compositions from existing images while dramatically increasing the speed and precision with which objects can be extracted.", "", "", "In this paper we describe a new technique for general purpose interactive segmentation of N-dimensional images. The user marks certain pixels as \"object\" or \"background\" to provide hard constraints for segmentation. Additional soft constraints incorporate both boundary and region information. Graph cuts are used to find the globally optimal segmentation of the N-dimensional image. The obtained solution gives the best balance of boundary and region properties among all segmentations satisfying the constraints. 
The topology of our segmentation is unrestricted and both \"object\" and \"background\" segments may consist of several isolated parts. Some experimental results are presented in the context of photo video editing and medical image segmentation. We also demonstrate an interesting Gestalt example. A fast implementation of our segmentation method is possible via a new max-flow algorithm.", "In this paper we introduce a new shape constraint for interactive image segmentation. It is an extension of Veksler's [25] star-convexity prior, in two ways: from a single star to multiple stars and from Euclidean rays to Geodesic paths. Global minima of the energy function are obtained subject to these new constraints. We also introduce Geodesic Forests, which exploit the structure of shortest paths in implementing the extended constraints. The star-convexity prior is used here in an interactive setting and this is demonstrated in a practical system. The system is evaluated by means of a “robot user” to measure the amount of interaction required in a precise way. We also introduce a new and harder dataset which augments the existing Grabcut dataset [1] with images and ground truth taken from the PASCAL VOC segmentation challenge [7].", "We present a new family of snakes that satisfy the property of multiresolution by exploiting subdivision schemes. We show in a generic way how to construct such snakes based on an admissible subdivision mask. We derive the necessary energy formulations and provide the formulas for their efficient computation. Depending on the choice of the mask, such models have the ability to reproduce trigonometric or polynomial curves. They can also be designed to be interpolating, a property that is useful in user-interactive applications. We provide explicit examples of subdivision snakes and illustrate their use for the segmentation of bioimages. We show that they are robust in the presence of noise and provide a multiresolution algorithm to enlarge their basin of attraction, which decreases their dependence on initialization compared to singleresolution snakes. We show the advantages of the proposed model in terms of computation and segmentation of structures with different sizes.", "We present TouchCut; a robust and efficient algorithm for segmenting image and video sequences with minimal user interaction. Our algorithm requires only a single finger touch to identify the object of interest in the image or first frame of video. Our approach is based on a level set framework, with an appearance model fusing edge, region texture and geometric information sampled local to the touched point. We first present our image segmentation solution, then extend this framework to progressive (per-frame) video segmentation, encouraging temporal coherence by incorporating motion estimation and a shape prior learned from previous frames. This new approach to visual object cut-out provides a practical solution for image and video segmentation on compact touch screen devices, facilitating spatially localized media manipulation. We describe such a case study, enabling users to selectively stylize video objects to create a hand-painted effect. We demonstrate the advantages of TouchCut by quantitatively comparing against the state of the art both in terms of accuracy, and run-time performance.", "A snake is an energy-minimizing spline guided by external constraint forces and influenced by image forces that pull it toward features such as lines and edges. 
Snakes are active contour models: they lock onto nearby edges, localizing them accurately. Scale-space continuation can be used to enlarge the capture region surrounding a feature. Snakes provide a unified account of a number of visual problems, including detection of edges, lines, and subjective contours; motion tracking; and stereo matching. We have used snakes successfully for interactive interpretation, in which user-imposed constraint forces guide the snake near features of interest.", "We propose a novel interactive cosegmentation method using global and local energy optimization. The global energy includes two terms: 1) the global scribbled energy and 2) the interimage energy. The first one utilizes the user scribbles to build the Gaussian mixture model and improve the cosegmentation performance. The second one is a global constraint, which attempts to match the histograms of common objects. To minimize the local energy, we apply the spline regression to learn the smoothness in a local neighborhood. This energy optimization can be converted into a constrained quadratic programming problem. To reduce the computational complexity, we propose an iterative optimization algorithm to decompose this optimization problem into several subproblems. The experimental results show that our method outperforms the state-of-the-art unsupervised cosegmentation and interactive cosegmentation methods on the iCoseg and MSRC benchmark data sets.", "Interactive object selection is a very important research problem and has many applications. Previous algorithms require substantial user interactions to estimate the foreground and background distributions. In this paper, we present a novel deep-learning-based algorithm which has much better understanding of objectness and can reduce user interactions to just a few clicks. Our algorithm transforms user-provided positive and negative clicks into two Euclidean distance maps which are then concatenated with the RGB channels of images to compose (image, user interactions) pairs. We generate many of such pairs by combining several random sampling strategies to model users' click patterns and use them to finetune deep Fully Convolutional Networks (FCNs). Finally the output probability maps of our FCN-8s model is integrated with graph cut optimization to refine the boundary segments. Our model is trained on the PASCAL segmentation dataset and evaluated on other datasets with different object classes. Experimental results on both seen and unseen objects demonstrate that our algorithm has a good generalization ability and is superior to all existing interactive object selection approaches.", "The interactive image segmentation model allows users to iteratively add new inputs for refinement until a satisfactory result is finally obtained. Therefore, an ideal interactive segmentation model should learn to capture the user's intention with minimal interaction. However, existing models fail to fully utilize the valuable user input information in the segmentation refinement process and thus offer an unsatisfactory user experience. In order to fully exploit the user-provided information, we propose a new deep framework, called Regional Interactive Segmentation Network (RIS-Net), to expand the field-of-view of the given inputs to capture the local regional information surrounding them for local refinement. Additionally, RIS-Net adopts multiscale global contextual information to augment each local region for improving feature representation. 
We also introduce click discount factors to develop a novel optimization strategy for more effective end-to-end training. Comprehensive evaluations on four challenging datasets well demonstrate the superiority of the proposed RIS-Net over other state-of-the-art approaches.", "Figure-ground segmentation from bounding box input, provided either automatically or manually, has been extremely popular in the last decade and influenced various applications. A lot of research has focused on high-quality segmentation, using complex formulations which often lead to slow techniques, and often hamper practical usage. In this paper we demonstrate a very fast segmentation technique which still achieves very high quality results. We propose to replace the time consuming iterative refinement of global colour models in traditional GrabCut formulation by a densely connected crf. To motivate this decision, we show that a dense crf implicitly models unnormalized global colour models for foreground and background. Such relationship provides insightful analysis to bridge between dense crf and GrabCut functional. We extensively evaluate our algorithm using two famous benchmarks. Our experimental results demonstrated that the proposed algorithm achieves an order of magnitude 10× speed-up with respect to the closest competitor, and at the same time achieves a considerably higher accuracy.", "", "The problem of efficient, interactive foreground background segmentation in still images is of great practical importance in image editing. Classical image segmentation tools use either texture (colour) information, e.g. Magic Wand, or edge (contrast) information, e.g. Intelligent Scissors. Recently, an approach based on optimization by graph-cut has been developed which successfully combines both types of information. In this paper we extend the graph-cut approach in three respects. First, we have developed a more powerful, iterative version of the optimisation. Secondly, the power of the iterative algorithm is used to simplify substantially the user interaction needed for a given quality of result. Thirdly, a robust algorithm for \"border matting\" has been developed to estimate simultaneously the alpha-matte around an object boundary and the colours of foreground pixels. We show that for moderately difficult examples the proposed method outperforms competitive tools.", "" ] }
1812.07260
2905125557
Interactive image segmentation algorithms rely on the user to provide annotations as guidance. When the task of interactive segmentation is performed on a small touchscreen device, the requirement of providing precise annotations could be cumbersome to the user. We design an efficient seed proposal method that actively proposes annotation seeds for the user to label. The user only needs to check which of the query seeds are inside the region of interest (ROI). We enforce the sparsity and diversity criteria on the selection of the query seeds. At each round of interaction the user is only presented with a small number of informative query seeds that are far apart from each other. As a result, we are able to derive a user-friendly interaction mechanism for annotation on small touchscreen devices. The user merely has to swipe through the ROI-relevant query seeds, which should be easy since such gestures are commonly used on a touchscreen. The performance of our algorithm is evaluated on six publicly available datasets. The evaluation results show that our algorithm achieves high segmentation accuracy, with short response time and little user feedback.
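The sparsity and diversity criteria described above lend themselves to a simple greedy procedure. The sketch below is an illustrative reading, not the authors' exact algorithm: candidate seeds (e.g., superpixel centers) are visited in order of an assumed informativeness score, and a seed is kept only if it is far from all previously kept seeds.

```python
import numpy as np

def propose_query_seeds(coords, informativeness, k=5, min_dist=30.0):
    """Greedy sparse-and-diverse seed proposal (illustrative sketch).

    coords:          (N, 2) candidate seed positions, e.g. superpixel centers
    informativeness: (N,) assumed per-candidate utility of asking for a label
    Returns indices of at most k seeds that are informative and far apart.
    """
    chosen = []
    for i in np.argsort(-informativeness):  # most informative first
        if len(chosen) == k:
            break
        # diversity criterion: keep only seeds at least min_dist apart
        if all(np.linalg.norm(coords[i] - coords[j]) >= min_dist for j in chosen):
            chosen.append(int(i))
    return chosen
```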
Another line of work is indirect interactive image segmentation @cite_17 @cite_41 @cite_3 @cite_13 @cite_5 , in which the algorithm typically recommends several uncertain regions to the user and then uses the user-selected regions to update the segmentation result. Batra et al. @cite_17 propose a co-segmentation algorithm that suggests where the user should draw scribbles next. Based on active learning, Fathi et al. @cite_41 present an incremental self-training video segmentation method that asks the user to provide annotations for gradually labeling the frames. For scene reconstruction, Kowdle et al. @cite_3 also employ an active learning algorithm to query the user's scribbles on uncertain regions. To segment a large 3D dataset, Straehl et al. @cite_5 provide various uncertainty measures to suggest candidate locations to the user, and then segment the dataset with a watershed cut according to the user-selected locations. Rupprecht et al. @cite_13 model segmentation uncertainty as a probability distribution over a set of sampled figure-ground segmentations; the collected segmentations are used to find the most uncertain region, whose label is then requested from the user. Chen et al. @cite_21 select the query pixel with the highest uncertainty according to a transductive-inference measure.
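Several of the cited methods score regions by predictive uncertainty. A common, method-agnostic choice (an assumption here, not tied to any single paper above) is the binary entropy of the current foreground probability; regions near 0.5 are the most informative to query:

```python
import numpy as np

def entropy_uncertainty(p_fg):
    """Binary entropy of per-region foreground probabilities.
    Values near 0.5 give entropy near 1 bit, i.e. maximal uncertainty."""
    p = np.clip(p_fg, 1e-12, 1.0 - 1e-12)  # guard against log(0)
    return -(p * np.log2(p) + (1.0 - p) * np.log2(1.0 - p))

# Query the most uncertain region:
# query_idx = int(np.argmax(entropy_uncertainty(p_fg)))
```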
{ "cite_N": [ "@cite_41", "@cite_21", "@cite_3", "@cite_5", "@cite_13", "@cite_17" ], "mid": [ "2124225542", "2591961327", "2057334901", "1971254795", "1907845728", "1964884769" ], "abstract": [ "This work addresses the problem of segmenting an object of interest out of a video. We show that video object segmentation can be naturally cast as a semi-supervised learning problem and be efficiently solved using harmonic functions. We propose an incremental self-training approach by iteratively labeling the least uncertain frame and updating similarity metrics. Our self-training video segmentation produces superior results both qualitatively and quantitatively. Moreover, usage of harmonic functions naturally supports interactive segmentation. We suggest active learning methods for providing guidance to user on what to annotate in order to improve labeling efficiency. We present experimental results using a ground truth data set and a quantitative comparison to a representative object segmentation system.", "This paper presents an efficient algorithm for interactive image segmentation that responds to 1-bit user feedback. The goal of this type of segmentation is to propose a sequence of yes-or-no questions to the user. Then, according to the 1-bit answers from the user, the segmentation algorithm progressively revises the questions and the segments, so that the segmentation result can approach the ideal region of interest (ROI) in the mind of the user. We define a question as an event that whether a chosen superpixel hits the ROI or not. In general, an interactive image segmentation algorithm is better to achieve high segmentation accuracy, low response time, and simple manipulation. We fulfill these demands by designing an efficient interactive segmentation algorithm from 1-bit user feedback. Our algorithm employs techniques from over-segmentation, entropy calculation, and transductive inference. Over-segmentation reduces the solution set of questions and the computational costs of transductive inference. Entropy calculation provides a way to characterize the query order of superpixels. Transductive inference is used to estimate the similarity between superpixels and to partition the superpixels into ROI and region of uninterest (ROU). Following the clues from the similarity between superpixels, we design the query-superpixel selection mechanism for human-machine interaction. Our key idea is to narrow down the solution set of questions, and then to propose the most informative question based on the clues of the similarities among the superpixels. We assess our method on four publicly available datasets. The experiments demonstrate that our method provides a plausible solution to the problem of interactive image segmentation with merely 1-bit user feedback.", "This paper presents an active-learning algorithm for piecewise planar 3D reconstruction of a scene. While previous interactive algorithms require the user to provide tedious interactions to identify all the planes in the scene, we build on successful ideas from the automatic algorithms and introduce the idea of active learning, thereby improving the reconstructions while considerably reducing the effort. Our algorithm first attempts to obtain a piecewise planar reconstruction of the scene automatically through an energy minimization framework. 
The proposed active-learning algorithm then uses intuitive cues to quantify the uncertainty of the algorithm and suggest regions, querying the user to provide support for the uncertain regions via simple scribbles. These interactions are used to suitably update the algorithm, leading to better reconstructions. We show through machine experiments and a user study that the proposed approach can intelligently query users for interactions on informative regions, and users can achieve better reconstructions of the scene faster, especially for scenes with texture-less surfaces lacking cues like lines which automatic algorithms rely on.", "Watershed cuts are among the fastest segmentation algorithms and therefore well suited for interactive segmentation of very large 3D data sets. To minimize the number of user interactions (“seeds”) required until the result is correct, we want the computer to actively query the human for input at the most critical locations, in analogy to active learning. These locations are found by means of suitable uncertainty measures. We propose various such measures for watershed cuts along with a theoretical analysis of some of their properties. Extensive evaluation on two types of 3D electron microscopic volumes of neural tissue shows that measures which estimate the non-local consequences of new user inputs achieve performance close to an oracle endowed with complete knowledge of the ground truth.", "Consider the following scenario between a human user and the computer. Given an image, the user thinks of an object to be segmented within this picture, but is only allowed to provide binary inputs to the computer (yes or no). In these conditions, can the computer guess this hidden segmentation by asking well-chosen questions to the user? We introduce a strategy for the computer to increase the accuracy of its guess in a minimal number of questions. At each turn, the current belief about the answer is encoded in a Bayesian fashion via a probability distribution over the set of all possible segmentations. To efficiently handle this huge space, the distribution is approximated by sampling representative segmentations using an adapted version of the Metropolis-Hastings algorithm, whose proposal moves build on a geodesic distance transform segmentation method. Following a dichotomic search, the question halving the weighted set of samples is finally picked, and the provided answer is used to update the belief for the upcoming rounds. The performance of this strategy is assessed on three publicly available datasets with diverse visual properties. Our approach shows to be a tractable and very adaptive solution to this problem.", "This paper presents an algorithm for Interactive Co-segmentation of a foreground object from a group of related images. While previous approaches focus on unsupervised co-segmentation, we use successful ideas from the interactive object-cutout literature. We develop an algorithm that allows users to decide what foreground is, and then guide the output of the co-segmentation algorithm towards it via scribbles. Interestingly, keeping a user in the loop leads to simpler and highly parallelizable energy functions, allowing us to work with significantly more images per group. However, unlike the interactive single image counterpart, a user cannot be expected to exhaustively examine all cutouts (from tens of images) returned by the system to make corrections. 
Hence, we propose iCoseg, an automatic recommendation system that intelligently recommends where the user should scribble next. We introduce and make publicly available the largest co-segmentation datasetyet, the CMU-Cornell iCoseg Dataset, with 38 groups, 643 images, and pixelwise hand-annotated groundtruth. Through machine experiments and real user studies with our developed interface, we show that iCoseg can intelligently recommend regions to scribble on, and users following these recommendations can achieve good quality cutouts with significantly lower time and effort than exhaustively examining all cutouts." ] }
1812.07260
2905125557
Interactive image segmentation algorithms rely on the user to provide annotations as guidance. When the task of interactive segmentation is performed on a small touchscreen device, the requirement of providing precise annotations could be cumbersome to the user. We design an efficient seed proposal method that actively proposes annotation seeds for the user to label. The user only needs to check which of the query seeds are inside the region of interest (ROI). We enforce the sparsity and diversity criteria on the selection of the query seeds. At each round of interaction the user is only presented with a small number of informative query seeds that are far apart from each other. As a result, we are able to derive a user-friendly interaction mechanism for annotation on small touchscreen devices. The user merely has to swipe through the ROI-relevant query seeds, which should be easy since such gestures are commonly used on a touchscreen. The performance of our algorithm is evaluated on six publicly available datasets. The evaluation results show that our algorithm achieves high segmentation accuracy, with short response time and little user feedback.
The purpose of object proposal generation @cite_30 @cite_23 @cite_25 @cite_16 @cite_6 @cite_35 is to provide a relatively small set of bounding boxes or segments covering probable object locations in an image, so that an object detector does not have to exhaustively examine all possible locations in a sliding-window manner. To increase the recall rate for object detection, a common strategy in proposal generation is to diversify the proposals. For example, Carreira and Sminchisescu @cite_23 present a diversification strategy, based on the maximal marginal relevance (MMR) measure @cite_8 , to improve object detection recall. Besides diversifying proposals in the spatial domain, diversifying proposals by their similarities in the feature domain has also been adopted @cite_25 @cite_16 @cite_6 @cite_35 .
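The MMR criterion @cite_8 greedily balances a proposal's own score against its redundancy with the proposals already selected. A minimal sketch (the relevance/similarity inputs and the trade-off weight lam are placeholders chosen for illustration):

```python
import numpy as np

def mmr_select(relevance, similarity, k, lam=0.7):
    """Greedy maximal-marginal-relevance selection of k diverse proposals.

    relevance:  (N,) per-proposal scores, e.g. objectness
    similarity: (N, N) pairwise similarities, e.g. box IoU
    """
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        rest = [i for i in range(len(relevance)) if i not in selected]
        scores = [lam * relevance[i]
                  - (1.0 - lam) * max(similarity[i, j] for j in selected)
                  for i in rest]
        selected.append(rest[int(np.argmax(scores))])
    return selected
```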
{ "cite_N": [ "@cite_30", "@cite_35", "@cite_8", "@cite_6", "@cite_23", "@cite_16", "@cite_25" ], "mid": [ "1991367009", "1958879265", "2083305840", "1922839076", "2017691720", "2088049833", "2121660792" ], "abstract": [ "We propose a unified approach for bottom-up hierarchical image segmentation and object candidate generation for recognition, called Multiscale Combinatorial Grouping (MCG). For this purpose, we first develop a fast normalized cuts algorithm. We then propose a high-performance hierarchical segmenter that makes effective use of multiscale information. Finally, we propose a grouping strategy that combines our multiscale regions into highly-accurate object candidates by exploring efficiently their combinatorial space. We conduct extensive experiments on both the BSDS500 and on the PASCAL 2012 segmentation datasets, showing that MCG produces state-of-the-art contours, hierarchical regions and object candidates.", "Distance metric plays a key role in grouping superpixels to produce object proposals for object detection. We observe that existing distance metrics work primarily for low complexity cases. In this paper, we develop a novel distance metric for grouping two superpixels in high-complexity scenarios. Combining them, a complexity-adaptive distance measure is produced that achieves improved grouping in different levels of complexity. Our extensive experimentation shows that our method can achieve good results in the PASCAL VOC 2012 dataset surpassing the latest state-of-the-art methods.", "This paper presents a method for combining query-relevance with information-novelty in the context of text retrieval and summarization. The Maximal Marginal Relevance (MMR) criterion strives to reduce redundancy while maintaining query relevance in re-ranking retrieved documents and in selecting apprw priate passages for text summarization. Preliminary results indicate some benefits for MMR diversity ranking in document retrieval and in single document summarization. The latter are borne out by the recent results of the SUMMAC conference in the evaluation of summarization systems. However, the clearest advantage is demonstrated in constructing non-redundant multi-document summaries, where MMR results are clearly superior to non-MMR passage selection.", "Hierarchical segmentation based object proposal methods have become an important step in modern object detection paradigm. However, standard single-way hierarchical methods are fundamentally flawed in that the errors in early steps cannot be corrected and accumulate. In this work, we propose a novel multi-branch hierarchical segmentation approach that alleviates such problems by learning multiple merging strategies in each step in a complementary manner, such that errors in one merging strategy could be corrected by the others. Our approach achieves the state-of-the-art performance for both object proposal and object detection tasks, comparing to previous object proposal methods.", "We present a novel framework for generating and ranking plausible objects hypotheses in an image using bottom-up processes and mid-level cues. The object hypotheses are represented as figure-ground segmentations, and are extracted automatically, without prior knowledge about properties of individual object classes, by solving a sequence of constrained parametric min-cut problems (CPMC) on a regular image grid. 
We then learn to rank the object hypotheses by training a continuous model to predict how plausible the segments are, given their mid-level region properties. We show that this algorithm significantly outperforms the state of the art for low-level segmentation in the VOC09 segmentation dataset. It achieves the same average best segmentation covering as the best performing technique to date [2], 0.61 when using just the top 7 ranked segments, instead of the full hierarchy in [2]. Our method achieves 0.78 average best covering using 154 segments. In a companion paper [18], we also show that the algorithm achieves state-of-the art results when used in a segmentation-based recognition pipeline.", "This paper addresses the problem of generating possible object locations for use in object recognition. We introduce selective search which combines the strength of both an exhaustive search and segmentation. Like segmentation, we use the image structure to guide our sampling process. Like exhaustive search, we aim to capture all possible object locations. Instead of a single technique to generate possible object locations, we diversify our search and use a variety of complementary image partitionings to deal with as many image conditions as possible. Our selective search results in a small set of data-driven, class-independent, high quality locations, yielding 99 recall and a Mean Average Best Overlap of 0.879 at 10,097 locations. The reduced number of locations compared to an exhaustive search enables the use of stronger machine learning techniques and stronger appearance models for object recognition. In this paper we show that our selective search enables the use of the powerful Bag-of-Words model for recognition. The selective search software is made publicly available (Software: http: disi.unitn.it uijlings SelectiveSearch.html ).", "Generic object detection is the challenging task of proposing windows that localize all the objects in an image, regardless of their classes. Such detectors have recently been shown to benefit many applications such as speeding-up class-specific object detection, weakly supervised learning of object detectors and object discovery. In this paper, we introduce a novel and very efficient method for generic object detection based on a randomized version of Prim's algorithm. Using the connectivity graph of an image's super pixels, with weights modelling the probability that neighbouring super pixels belong to the same object, the algorithm generates random partial spanning trees with large expected sum of edge weights. Object localizations are proposed as bounding-boxes of those partial trees. Our method has several benefits compared to the state-of-the-art. Thanks to the efficiency of Prim's algorithm, it samples proposals very quickly: 1000 proposals are obtained in about 0.7s. With proposals bound to super pixel boundaries yet diversified by randomization, it yields very high detection rates and windows that tightly fit objects. In extensive experiments on the challenging PASCAL VOC 2007 and 2012 and SUN2012 benchmark datasets, we show that our method improves over state-of-the-art competitors for a wide range of evaluation scenarios." ] }
1812.07096
2898925204
Abstract Wireless communication environments comprise passive objects that cause performance degradation and eavesdropping concerns due to anomalous scattering. This paper proposes a new paradigm, where scattering becomes software-defined and, subsequently, optimizable across wide frequency ranges. Through the proposed programmable wireless environments, the path loss, multi-path fading and interference effects can be controlled and mitigated. Moreover, the eavesdropping can be prevented via novel physical layer security capabilities. The core technology of this new paradigm is the concept of metasurfaces, which are planar intelligent structures whose effects on impinging electromagnetic waves are fully defined by their micro-structure. Their control over impinging waves has been demonstrated to span from 1 GHz to 10 THz. This paper contributes the software-programmable wireless environment, consisting of several HyperSurface tiles (programmable metasurfaces) controlled by a central server. HyperSurfaces are a novel class of metasurfaces whose structure and, hence, electromagnetic behavior can be altered and controlled via a software interface. Multiple networked tiles coat indoor objects, allowing fine-grained, customizable reflection, absorption or polarization overall. A central server calculates and deploys the optimal electromagnetic interaction per tile, to the benefit of communicating devices. Realistic simulations using full 3D ray-tracing demonstrate the groundbreaking performance and security potential of the proposed approach in 2.4 GHz and 60 GHz frequencies.
Phased array antennas have been used to actively, and potentially adaptively, alter the probabilistic behavior of a channel. Array panels hung on walls have been shown to considerably influence the communication quality of wireless devices. Phased array antennas comprise several half- or quarter-wavelength antennas, combined with hardware to control their relative phase. Altering the relative phase of an antenna corresponds to a local change in the refractive index of the array @cite_0 . Thus, proper phase configurations allow for anomalous wave steering and even absorption. However, the phase-based operation is coherent and deterministic only in the far field. For a square panel with size @math m and operating frequency of @math GHz, the far field extends beyond @math m. For @math GHz the far-field limit is at @math m. This makes indoor applicability difficult, even for very small panels. Sizeable deployments can also be limited by the cost and power consumption of the phase control hardware.
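The far-field figures above follow from the standard Fraunhofer criterion d = 2D^2/lambda for an aperture of largest dimension D (a textbook fact; the concrete panel sizes and frequencies in the text are left symbolic, so the values below are purely illustrative):

```python
C = 3e8  # speed of light in m/s

def far_field_distance(D_m, freq_hz):
    """Fraunhofer far-field distance d = 2 * D^2 / wavelength."""
    wavelength = C / freq_hz
    return 2.0 * D_m ** 2 / wavelength

# Illustrative only: a 1 m panel at 2.4 GHz and at 60 GHz.
print(far_field_distance(1.0, 2.4e9))  # -> 16.0 (meters)
print(far_field_distance(1.0, 60e9))   # -> 400.0 (meters)
```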
{ "cite_N": [ "@cite_0" ], "mid": [ "2121607567" ], "abstract": [ "Conventional optical components rely on gradual phase shifts accumulated during light propagation to shape light beams. New degrees of freedom are attained by introducing abrupt phase changes over the scale of the wavelength. A two-dimensional array of optical resonators with spatially varying phase response and subwavelength separation can imprint such phase discontinuities on propagating light as it traverses the interface between two media. Anomalous reflection and refraction phenomena are observed in this regime in optically thin arrays of met allic antennas on silicon with a linear phase variation along the interface, which are in excellent agreement with generalized laws derived from Fermat’s principle. Phase discontinuities provide great flexibility in the design of light beams, as illustrated by the generation of optical vortices through use of planar designer met allic interfaces." ] }
1812.07096
2898925204
Abstract Wireless communication environments comprise passive objects that cause performance degradation and eavesdropping concerns due to anomalous scattering. This paper proposes a new paradigm, where scattering becomes software-defined and, subsequently, optimizable across wide frequency ranges. Through the proposed programmable wireless environments, the path loss, multi-path fading and interference effects can be controlled and mitigated. Moreover, the eavesdropping can be prevented via novel physical layer security capabilities. The core technology of this new paradigm is the concept of metasurfaces, which are planar intelligent structures whose effects on impinging electromagnetic waves are fully defined by their micro-structure. Their control over impinging waves has been demonstrated to span from 1 GHz to 10 THz. This paper contributes the software-programmable wireless environment, consisting of several HyperSurface tiles (programmable metasurfaces) controlled by a central server. HyperSurfaces are a novel class of metasurfaces whose structure and, hence, electromagnetic behavior can be altered and controlled via a software interface. Multiple networked tiles coat indoor objects, allowing fine-grained, customizable reflection, absorption or polarization overall. A central server calculates and deploys the optimal electromagnetic interaction per tile, to the benefit of communicating devices. Realistic simulations using full 3D ray-tracing demonstrate the groundbreaking performance and security potential of the proposed approach in 2.4 GHz and 60 GHz frequencies.
Un-phased antenna deployments have also been proposed as a cheaper and simpler alternative. In this case, simple antennas are placed over planar objects at relatively large distances from one another to avoid coupling effects. Control over the EM waves is exerted only at the antenna positions, while most of the surface of the planar object continues to interact uncontrollably with EM waves. Thus, deterministic control is not attained, even in the far field. Instead, this approach attains a probabilistic effect on the channel behavior, which can be quantified via measurements after deployment has taken place @cite_2 .
{ "cite_N": [ "@cite_2" ], "mid": [ "2769583887" ], "abstract": [ "Smart spaces, such as smart homes and smart offices, are common Internet of Things (IoT) scenarios for building automation with networked sensors. In this paper, we suggest a different notion of smart spaces, where the radio environment is programmable to achieve desirable link quality within the space. We envision deploying low-cost devices embedded in the walls of a building to passively reflect or actively transmit radio signals. This is a significant departure from typical approaches to optimizing endpoint radios and individual links to improve performance. In contrast to previous work combating or leveraging per-link multipath fading, we actively reconfigure the multipath propagation. We sketch design and implementation directions for such a programmable radio environment, highlighting the computational and operational challenges our architecture faces. Preliminary experiments demonstrate the efficacy of using passive elements to change the wireless channel, shifting frequency \"nulls\" by nine Wi-Fi subcarriers, changing the 2 x 2 MIMO channel condition number by 1.5 dB, and attenuating or enhancing signal strength by up to 26 dB." ] }
1812.07107
2916461310
This article defines the encrypted gate, which is denoted by @math . We present a gate-teleportation-based two-party computation scheme for @math , where one party gives an arbitrary quantum state @math as input and obtains the encrypted @math -computing result @math , and the other party obtains the random bits @math . Based on @math , we propose a method to remove the @math -error generated in the homomorphic evaluation of the @math -gate. Using this method, we design two non-interactive and perfectly secure QHE schemes named GT and VGT . Both of them are @math -homomorphic and quasi-compact (the decryption complexity depends on the @math -gate complexity). Assuming @math -homomorphism, non-interaction and perfect security are necessary properties, the quasi-compactness is proved to be bounded by @math , where @math is the total number of @math -gates in the evaluated circuit. VGT is proved to be optimal and has @math -quasi-compactness. According to our QHE schemes, the decryption would be inefficient if the evaluated circuit contains an exponential number of @math -gates. Thus our schemes are suitable for homomorphic evaluation of any quantum circuit with low @math -gate complexity, such as any polynomial-size quantum circuit or any quantum circuit with a polynomial number of @math -gates.
The QHE scheme EPR proposed by Broadbent and Jeffery @cite_6 makes use of Bell states and quantum measurement. That scheme is constructed from the combination of QOTP and classical FHE, and is therefore computationally secure. Moreover, it is proved to be @math -quasi-compact, where @math is the number of @math -gates in the evaluated circuit. In this article, our schemes also make use of Bell states and quantum measurement. However, our scheme VGT has perfect security and @math -quasi-compactness, so it improves on EPR.
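For context, the T-gate obstruction that both schemes must handle comes from the standard commutation of a T-gate with the QOTP keys. Up to a global phase, with P the phase gate, the well-known identity (restated here for the reader) is:

```latex
T \, X^{a} Z^{b} \lvert \psi \rangle
  \;=\; X^{a} Z^{a \oplus b} P^{a} \, T \lvert \psi \rangle ,
```

so evaluating a T-gate introduces a key-dependent P^a error that the evaluation procedure must remove without learning the key bit a.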
{ "cite_N": [ "@cite_6" ], "mid": [ "1900181948" ], "abstract": [ "Fully homomorphic encryption is an encryption method with the property that any computation on the plaintext can be performed by a party having access to the ciphertext only. Here, we formally define and give schemes for quantum homomorphic encryption, which is the encryption of quantum information such that quantum computations can be performed given the ciphertext only. Our schemes allow for arbitrary Clifford group gates, but become inefficient for circuits with large complexity, measured in terms of the non-Clifford portion of the circuit (we use the “ ( 8 )” non-Clifford group gate, also known as the ( T )-gate)." ] }
1812.07107
2916461310
This article defines the encrypted gate, which is denoted by @math . We present a gate-teleportation-based two-party computation scheme for @math , where one party gives an arbitrary quantum state @math as input and obtains the encrypted @math -computing result @math , and the other party obtains the random bits @math . Based on @math , we propose a method to remove the @math -error generated in the homomorphic evaluation of the @math -gate. Using this method, we design two non-interactive and perfectly secure QHE schemes named GT and VGT . Both of them are @math -homomorphic and quasi-compact (the decryption complexity depends on the @math -gate complexity). Assuming @math -homomorphism, non-interaction and perfect security are necessary properties, the quasi-compactness is proved to be bounded by @math , where @math is the total number of @math -gates in the evaluated circuit. VGT is proved to be optimal and has @math -quasi-compactness. According to our QHE schemes, the decryption would be inefficient if the evaluated circuit contains an exponential number of @math -gates. Thus our schemes are suitable for homomorphic evaluation of any quantum circuit with low @math -gate complexity, such as any polynomial-size quantum circuit or any quantum circuit with a polynomial number of @math -gates.
@cite_15 prove a no-go result: if interaction is not allowed, there exists no QFHE scheme with perfect security. An enhanced no-go result has been proved independently by Newman and Shi @cite_18 and by Lai and Chung @cite_27 : if interaction is not allowed, there exists no ITS QFHE scheme. This article focuses on non-interactive and perfectly secure QHE schemes. Our schemes are quasi-compact, so they are not QFHE schemes, and therefore our result does not contradict those no-go results. Though our schemes are not QFHE schemes, they can implement any unitary quantum circuit homomorphically (possibly inefficiently).
{ "cite_N": [ "@cite_27", "@cite_15", "@cite_18" ], "mid": [ "", "2079128711", "2098611377" ], "abstract": [ "", "Homomorphic encryption is a form of encryption which allows computation to be carried out on the encrypted data without the need for decryption. The success of quantum approaches to related tasks in a delegated computation setting has raised the question of whether quantum mechanics may be used to achieve information-theoretically-secure fully homomorphic encryption. Here we show, via an information localization argument, that deterministic fully homomorphic encryption necessarily incurs exponential overhead if perfect security is required.", "We investigate how a classical private key can be used by two players, connected by an insecure one-way quantum channel, to perform private communication of quantum information. In particular, we show that in order to transmit n qubits privately, 2n bits of shared private key are necessary and sufficient. This result may be viewed as the quantum analogue of the classical one-time pad encryption scheme." ] }
1812.07060
2905308012
Neural network pruning is an important step in the design process of efficient neural networks for edge devices with limited computational power. Pruning is a form of knowledge transfer from the weights of the original network to a smaller target subnetwork. We propose a new method for compute-constrained structured channel-wise pruning of convolutional neural networks. The method iteratively fine-tunes the network, while gradually tapering the computation resources available to the pruned network via a holonomic constraint in the method of Lagrangian multipliers framework. An explicit and adaptive automatic control over the rate of tapering is provided. The trainable parameters of our pruning method are separate from the weights of the neural network, which allows us to avoid interference with the neural network solver (e.g. avoid the direct dependence of pruning speed on neural network learning rates). Our method combines the "rigoristic" approach of directly applying constrained optimization, avoiding the pitfalls of ADMM-based methods, like their need to define the target amount of resources for each pruning run, and the direct dependence of pruning speed and priority of pruning on the relative scale of weights between layers. For VGG-16 @ ILSVRC-2012, we achieve reduction of 15.47 -> 3.87 GMAC with only 1 top-1 accuracy reduction (68.4 -> 67.4). For AlexNet @ ILSVRC-2012, we achieve 0.724 -> 0.411 GMAC with 1 top-1 accuracy reduction (56.8 -> 55.8).
One large branch of pruning methods stems from the basic scheme of Han et al. (2015) @cite_18 ; we will call them "heuristic" methods. These methods repeatedly choose elements based on some scalar metric (salience) and remove them from the network. Each iteration of removal is followed by fine-tuning. Salience can be based on the @math norm of element weights @cite_18 @cite_7 @cite_2 @cite_1 @cite_5 @cite_21 , a Taylor estimate of the change in loss caused by element removal @cite_3 , the percentage of zero activations in a channel (the APoZ metric @cite_27 ), statistics of channel activations @cite_6 , etc. Some methods improve fine-tuning by compensating for the removal of elements through changes to the remaining weights of the network: by using linear least squares to approximate the output of the original layer in the @math metric @cite_12 @cite_17 , or by finding paired channels with similar weights and updating the weights of one channel to compensate for the removal of the other @cite_14 . Another way to help fine-tuning is to make pruning reversible ("splicing") @cite_25 @cite_22 .
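As a concrete instance of the salience-then-fine-tune loop, a minimal PyTorch sketch of the @math -norm variant might look as follows (helper names are ours; a real pipeline must also drop the matching input channels of the next layer and then fine-tune):

```python
import torch

def l1_channel_salience(conv_weight):
    """L1-norm salience per output channel for a conv weight of shape
    (out_channels, in_channels, kH, kW)."""
    return conv_weight.abs().sum(dim=(1, 2, 3))

def prune_lowest_channels(conv_weight, n_remove):
    """One pruning iteration (assumes n_remove >= 1): drop the n_remove
    least salient output channels. Fine-tuning with the usual training
    loop would follow and is omitted here."""
    salience = l1_channel_salience(conv_weight)
    keep = torch.argsort(salience, descending=True)[:-n_remove]
    return conv_weight[keep].clone(), keep  # keep also indexes next layer's inputs
```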
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_22", "@cite_7", "@cite_21", "@cite_1", "@cite_3", "@cite_6", "@cite_27", "@cite_2", "@cite_5", "@cite_25", "@cite_12", "@cite_17" ], "mid": [ "1845051632", "992687842", "", "", "", "", "2707890836", "2123469553", "2495425901", "", "", "2507318699", "2950837708", "" ], "abstract": [ "Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems. Also, conventional networks fix the architecture before training starts; as a result, training cannot improve the architecture. To address these limitations, we describe a method to reduce the storage and computation required by neural networks by an order of magnitude without affecting their accuracy by learning only the important connections. Our method prunes redundant connections using a three-step method. First, we train the network to learn which connections are important. Next, we prune the unimportant connections. Finally, we retrain the network to fine tune the weights of the remaining connections. On the ImageNet dataset, our method reduced the number of parameters of AlexNet by a factor of 9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar experiments with VGG-16 found that the number of parameters can be reduced by 13x, from 138 million to 10.3 million, again with no loss of accuracy.", "Deep Neural nets (NNs) with millions of parameters are at the heart of many state-of-the-art computer vision systems today. However, recent works have shown that much smaller models can achieve similar levels of performance. In this work, we address the problem of pruning parameters in a trained NN model. Instead of removing individual weights one at a time as done in previous works, we remove one neuron at a time. We show how similar neurons are redundant, and propose a systematic way to remove them. Our experiments in pruning the densely connected layers show that we can remove upto 85 of the total parameters in an MNIST-trained network, and about 35 for AlexNet without significantly affecting performance. Our method can be applied on top of most networks with a fully connected layer to give a smaller network.", "", "", "", "", "We propose a new formulation for pruning convolutional kernels in neural networks to enable efficient inference. We interleave greedy criteria-based pruning with fine-tuning by backpropagation - a computationally efficient procedure that maintains good generalization in the pruned network. We propose a new criterion based on Taylor expansion that approximates the change in the cost function induced by pruning network parameters. We focus on transfer learning, where large pretrained networks are adapted to specialized tasks. The proposed criterion demonstrates superior performance compared to other criteria, e.g. the norm of kernel weights or feature map activation, for pruning large CNNs after adaptation to fine-grained classification tasks (Birds-200 and Flowers-102) relaying only on the first order gradient information. We also show that pruning can lead to more than 10x theoretical (5x practical) reduction in adapted 3D-convolutional filters with a small drop in accuracy in a recurrent gesture classifier. Finally, we show results for the large-scale ImageNet dataset to emphasize the flexibility of our approach.", "A major challenge in biometrics is performing the test at the client side, where hardware resources are often limited. 
Deep learning approaches pose a unique challenge: while such architectures dominate the field of face recognition with regard to accuracy, they require elaborate, multi-stage computations. Recently, there has been some work on compressing networks for the purpose of reducing run time and network size. However, it is not clear that these compression methods would work in deep face nets, which are, generally speaking, less redundant than the object recognition networks, i.e., they are already relatively lean. We propose two novel methods for compression: one based on eliminating lowly active channels and the other on coupling pruning with repeated use of already computed elements. Pruning of entire channels is an appealing idea, since it leads to direct saving in run time in almost every reasonable architecture.", "State-of-the-art neural networks are getting deeper and wider. While their performance increases with the increasing number of layers and neurons, it is crucial to design an efficient deep architecture in order to reduce computational and memory costs. Designing an efficient neural network, however, is labor intensive requiring many experiments, and fine-tunings. In this paper, we introduce network trimming which iteratively optimizes the network by pruning unimportant neurons based on analysis of their outputs on a large dataset. Our algorithm is inspired by an observation that the outputs of a significant portion of neurons in a large network are mostly zero, regardless of what inputs the network received. These zero activation neurons are redundant, and can be removed without affecting the overall accuracy of the network. After pruning the zero activation neurons, we retrain the network using the weights before pruning as initialization. We alternate the pruning and retraining to further reduce zero activations in a network. Our experiments on the LeNet and VGG-16 show that we can achieve high compression ratio of parameters without losing or even achieving higher accuracy than the original network.", "", "", "Deep learning has become a ubiquitous technology to improve machine intelligence. However, most of the existing deep models are structurally very complex, making them difficult to be deployed on the mobile platforms with limited computational power. In this paper, we propose a novel network compression method called dynamic network surgery, which can remarkably reduce the network complexity by making on-the-fly connection pruning. Unlike the previous methods which accomplish this task in a greedy way, we properly incorporate connection splicing into the whole process to avoid incorrect pruning and make it as a continual network maintenance. The effectiveness of our method is proved with experiments. Without any accuracy loss, our method can efficiently compress the number of parameters in LeNet-5 and AlexNet by a factor of @math and @math respectively, proving that it outperforms the recent pruning method by considerable margins. Code and some models are available at this https URL", "Deep convolutional neural networks (CNNs) are indispensable to state-of-the-art computer vision algorithms. However, they are still rarely deployed on battery-powered mobile devices, such as smartphones and wearable gadgets, where vision algorithms can enable many revolutionary real-world applications. The key limiting factor is the high energy consumption of CNN processing due to its high computational complexity. 
While there are many previous efforts that try to reduce the CNN model size or amount of computation, we find that they do not necessarily result in lower energy consumption, and therefore do not serve as a good metric for energy cost estimation. To close the gap between CNN design and energy consumption optimization, we propose an energy-aware pruning algorithm for CNNs that directly uses energy consumption estimation of a CNN to guide the pruning process. The energy estimation methodology uses parameters extrapolated from actual hardware measurements that target realistic battery-powered system setups. The proposed layer-by-layer pruning algorithm also prunes more aggressively than previously proposed pruning methods by minimizing the error in output feature maps instead of filter weights. For each layer, the weights are first pruned and then locally fine-tuned with a closed-form least-square solution to quickly restore the accuracy. After all layers are pruned, the entire network is further globally fine-tuned using back-propagation. With the proposed pruning method, the energy consumption of AlexNet and GoogLeNet are reduced by 3.7x and 1.6x, respectively, with less than 1 top-5 accuracy loss. Finally, we show that pruning the AlexNet with a reduced number of target classes can greatly decrease the number of weights but the energy reduction is limited. Energy modeling tool and energy-aware pruned models available at this http URL", "" ] }
1812.07060
2905308012
Neural network pruning is an important step in the design process of efficient neural networks for edge devices with limited computational power. Pruning is a form of knowledge transfer from the weights of the original network to a smaller target subnetwork. We propose a new method for compute-constrained structured channel-wise pruning of convolutional neural networks. The method iteratively fine-tunes the network, while gradually tapering the computation resources available to the pruned network via a holonomic constraint in the method of Lagrangian multipliers framework. An explicit and adaptive automatic control over the rate of tapering is provided. The trainable parameters of our pruning method are separate from the weights of the neural network, which allows us to avoid interference with the neural network solver (e.g. avoid the direct dependence of pruning speed on neural network learning rates). Our method combines the "rigoristic" approach of directly applying constrained optimization, avoiding the pitfalls of ADMM-based methods, like their need to define the target amount of resources for each pruning run, and the direct dependence of pruning speed and priority of pruning on the relative scale of weights between layers. For VGG-16 @ ILSVRC-2012, we achieve reduction of 15.47 -> 3.87 GMAC with only 1 top-1 accuracy reduction (68.4 -> 67.4). For AlexNet @ ILSVRC-2012, we achieve 0.724 -> 0.411 GMAC with 1 top-1 accuracy reduction (56.8 -> 55.8).
"Fisher pruning" @cite_15 resembles these "heuristic" methods, but its salience is based on the method of Lagrange multipliers, which makes the method resource-aware and less heuristic. It removes one channel per pruning iteration, so its pruning speed is fixed and does not slow down.
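The salience used by Fisher pruning can be estimated from quantities already available during training; the sketch below follows the commonly stated second-order form (constants and the resource-cost term of @cite_15 are simplified away, so treat it as an approximation, not the paper's exact criterion):

```python
import torch

def fisher_salience(activation, activation_grad):
    """Estimate the loss increase from zeroing each channel.

    activation, activation_grad: (batch, C, H, W) tensors, where the grad
    is d(loss)/d(activation) captured during the backward pass.
    """
    per_sample = (activation * activation_grad).sum(dim=(2, 3))  # (batch, C)
    return 0.5 * per_sample.pow(2).mean(dim=0)                   # (C,)
```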
{ "cite_N": [ "@cite_15" ], "mid": [ "2783873922" ], "abstract": [ "Predicting human fixations from images has recently seen large improvements by leveraging deep representations which were pretrained for object recognition. However, as we show in this paper, these networks are highly overparameterized for the task of fixation prediction. We first present a simple yet principled greedy pruning method which we call Fisher pruning. Through a combination of knowledge distillation and Fisher pruning, we obtain much more runtime-efficient architectures for saliency prediction, achieving a 10x speedup for the same AUC performance as a state of the art network on the CAT2000 dataset. Speeding up single-image gaze prediction is important for many real-world applications, but it is also a crucial step in the development of video saliency models, where the amount of data to be processed is substantially larger." ] }
1812.07060
2905308012
Neural network pruning is an important step in the design process of efficient neural networks for edge devices with limited computational power. Pruning is a form of knowledge transfer from the weights of the original network to a smaller target subnetwork. We propose a new method for compute-constrained structured channel-wise pruning of convolutional neural networks. The method iteratively fine-tunes the network, while gradually tapering the computation resources available to the pruned network via a holonomic constraint in the method of Lagrangian multipliers framework. An explicit and adaptive automatic control over the rate of tapering is provided. The trainable parameters of our pruning method are separate from the weights of the neural network, which allows us to avoid interference with the neural network solver (e.g. avoid the direct dependence of pruning speed on neural network learning rates). Our method combines the "rigoristic" approach of directly applying constrained optimization, avoiding the pitfalls of ADMM-based methods, like their need to define the target amount of resources for each pruning run, and the direct dependence of pruning speed and priority of pruning on the relative scale of weights between layers. For VGG-16 @ ILSVRC-2012, we achieve reduction of 15.47 -> 3.87 GMAC with only 1 top-1 accuracy reduction (68.4 -> 67.4). For AlexNet @ ILSVRC-2012, we achieve 0.724 -> 0.411 GMAC with 1 top-1 accuracy reduction (56.8 -> 55.8).
The method from @cite_23 trains channel scaling factors to simulate channel-granularity pruning; however, the factors are not limited to the @math range. The factors are updated with an SGD-like method called ISTA, which includes a sparsity-inducing @math regularization term resembling the Lagrangian term; this also makes the method resource-aware.
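ISTA itself is a generic proximal-gradient update: a gradient step followed by soft-thresholding, which drives some scaling factors exactly to zero so the corresponding channels can be pruned. A minimal sketch (the pairing with the method's resource-aware penalty is simplified away):

```python
import torch

def ista_step(scale, grad, lr, l1_strength):
    """One ISTA update on channel scaling factors."""
    z = scale - lr * grad                              # plain gradient step
    shrink = torch.clamp(z.abs() - lr * l1_strength, min=0.0)
    return torch.sign(z) * shrink                      # soft-thresholding
```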
{ "cite_N": [ "@cite_23" ], "mid": [ "2786054724" ], "abstract": [ "Model pruning has become a useful technique that improves the computational efficiency of deep learning, making it possible to deploy solutions on resource-limited scenarios. A widely-used practice in relevant work assumes that a smaller-norm parameter or feature plays a less informative role at the inference time. In this paper, we propose a channel pruning technique for accelerating the computations of deep convolutional neural networks (CNNs), which does not critically rely on this assumption. Instead, it focuses on direct simplification of the channel-to-channel computation graph of a CNN without the need of performing a computational difficult and not always useful task of making high-dimensional tensors of CNN structured sparse. Our approach takes two stages: the first being to adopt an end-to-end stochastic training method that eventually forces the outputs of some channels being constant, and the second being to prune those constant channels from the original neural network by adjusting the biases of their impacting layers such that the resulting compact model can be quickly fine-tuned. Our approach is mathematically appealing from an optimization perspective and easy to reproduce. We experimented our approach through several image learning benchmarks and demonstrate its interesting aspects and the competitive performance." ] }
1812.07060
2905308012
Neural network pruning is an important step in the design process of efficient neural networks for edge devices with limited computational power. Pruning is a form of knowledge transfer from the weights of the original network to a smaller target subnetwork. We propose a new method for compute-constrained structured channel-wise pruning of convolutional neural networks. The method iteratively fine-tunes the network, while gradually tapering the computation resources available to the pruned network via a holonomic constraint in the method of Lagrangian multipliers framework. An explicit and adaptive automatic control over the rate of tapering is provided. The trainable parameters of our pruning method are separate from the weights of the neural network, which allows us to avoid interference with the neural network solver (e.g. avoid the direct dependence of pruning speed on neural network learning rates). Our method combines the "rigoristic" approach of directly applying constrained optimization, avoiding the pitfalls of ADMM-based methods, like their need to define the target amount of resources for each pruning run, and the direct dependence of pruning speed and priority of pruning on the relative scale of weights between layers. For VGG-16 @ ILSVRC-2012, we achieve reduction of 15.47 -> 3.87 GMAC with only 1 top-1 accuracy reduction (68.4 -> 67.4). For AlexNet @ ILSVRC-2012, we achieve 0.724 -> 0.411 GMAC with 1 top-1 accuracy reduction (56.8 -> 55.8).
Structured Probabilistic Pruning @cite_19 trains probabilities of channel removal. At every pruning iteration the probabilities are updated with a heuristic rule based on the rank of each channel across all layers under the @math metric of channel weights. This requires the user to define the desired number of channels in advance. The method can provide a size-quality curve based on the intermediate iterations and is not resource-aware.
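The probabilistic part of SPP can be illustrated with a simple sampling step; the probability-update heuristic itself (raising the pruning probability of channels ranked low by weight norm) is not reproduced here, so the sketch below only shows how a stochastic mask would be drawn each iteration:

```python
import numpy as np

def sample_pruning_mask(prune_prob, rng=None):
    """Draw a keep/prune mask: channel c is pruned this iteration with
    probability prune_prob[c], so unimportant channels are suppressed often
    but can still recover while the probabilities evolve."""
    rng = rng or np.random.default_rng()
    return rng.random(len(prune_prob)) >= prune_prob  # True = keep
```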
{ "cite_N": [ "@cite_19" ], "mid": [ "2757143157" ], "abstract": [ "Although deep Convolutional Neural Network (CNN) has shown better performance in various computer vision tasks, its application is restricted by a significant increase in storage and computation. Among CNN simplification techniques, parameter pruning is a promising approach which aims at reducing the number of weights of various layers without intensively reducing the original accuracy. In this paper, we propose a novel progressive parameter pruning method, named Structured Probabilistic Pruning (SPP), which effectively prunes weights of convolutional layers in a probabilistic manner. Specifically, unlike existing deterministic pruning approaches, where unimportant weights are permanently eliminated, SPP introduces a pruning probability for each weight, and pruning is guided by sampling from the pruning probabilities. A mechanism is designed to increase and decrease pruning probabilities based on importance criteria for the training process. Experiments show that, with 4x speedup, SPP can accelerate AlexNet with only 0.3 loss of top-5 accuracy and VGG-16 with 0.8 loss of top-5 accuracy in ImageNet classification. Moreover, SPP can be directly applied to accelerate multi-branch CNN networks, such as ResNet, without specific adaptations. Our 2x speedup ResNet-50 only suffers 0.8 loss of top-5 accuracy on ImageNet. We further prove the effectiveness of our method on transfer learning task on Flower-102 dataset with AlexNet." ] }
1812.07067
2903794034
In this paper, we proposed a novel Probabilistic Attribute Tree-CNN (PAT-CNN) to explicitly deal with the large intra-class variations caused by identity-related attributes, e.g., age, race, and gender. Specifically, a novel PAT module with an associated PAT loss was proposed to learn features in a hierarchical tree structure organized according to attributes, where the final features are less affected by the attributes. Then, expression-related features are extracted from leaf nodes. Samples are probabilistically assigned to tree nodes at different levels such that expression-related features can be learned from all samples weighted by probabilities. We further proposed a semi-supervised strategy to learn the PAT-CNN from limited attribute-annotated samples to make the best use of available data. Experimental results on five facial expression datasets have demonstrated that the proposed PAT-CNN outperforms the baseline models by explicitly modeling attributes. More impressively, the PAT-CNN using a single model achieves the best performance for faces in the wild on the SFEW dataset, compared with the state-of-the-art methods using an ensemble of hundreds of CNNs.
Facial expression recognition has been extensively studied, as elaborated in the recent surveys @cite_2 @cite_7 . One of the major steps in facial expression recognition is to extract features that capture the appearance and geometry changes caused by facial behavior, from either static images or dynamic sequences. These features can be roughly divided into two main categories: hand-crafted and learned features. Recently, features learned by deep CNNs have achieved promising results, especially in more challenging settings. Most of these approaches were trained on all training data, so attribute-related and expression-related facial appearances are intertwined in the learned features. While progress has been achieved in the choice of features and classifiers, the challenge posed by subject variations remains for person-independent recognition.
{ "cite_N": [ "@cite_7", "@cite_2" ], "mid": [ "2737559518", "1965947362" ], "abstract": [ "As one of the most comprehensive and objective ways to describe facial expressions, the Facial Action Coding System (FACS) has recently received significant attention. Over the past 30 years, extensive research has been conducted by psychologists and neuroscientists on various aspects of facial expression analysis using FACS. Automating FACS coding would make this research faster and more widely applicable, opening up new avenues to understanding how we communicate through facial expressions. Such an automated process can also potentially increase the reliability, precision and temporal resolution of coding. This paper provides a comprehensive survey of research into machine analysis of facial actions. We systematically review all components of such systems: pre-processing, feature extraction and machine coding of facial actions. In addition, the existing FACS-coded facial expression databases are summarised. Finally, challenges that have to be addressed to make automatic facial action analysis applicable in real-life situations are extensively discussed. There are two underlying motivations for us to write this survey paper: the first is to provide an up-to-date review of the existing literature, and the second is to offer some insights into the future of machine recognition of facial actions: what are the challenges and opportunities that researchers in the field face.", "Automatic affect analysis has attracted great interest in various contexts including the recognition of action units and basic or non-basic emotions. In spite of major efforts, there are several open questions on what the important cues to interpret facial expressions are and how to encode them. In this paper, we review the progress across a range of affect recognition applications to shed light on these fundamental questions. We analyse the state-of-the-art solutions by decomposing their pipelines into fundamental components, namely face registration, representation, dimensionality reduction and recognition. We discuss the role of these components and highlight the models and new trends that are followed in their design. Moreover, we provide a comprehensive analysis of facial representations by uncovering their advantages and limitations; we elaborate on the type of information they encode and discuss how they deal with the key challenges of illumination variations, registration errors, head-pose variations, occlusions, and identity bias. This survey allows us to identify open issues and to define future directions for designing real-world affect recognition systems." ] }
1812.07067
2903794034
In this paper, we proposed a novel Probabilistic Attribute Tree-CNN (PAT-CNN) to explicitly deal with the large intra-class variations caused by identity-related attributes, e.g., age, race, and gender. Specifically, a novel PAT module with an associated PAT loss was proposed to learn features in a hierarchical tree structure organized according to attributes, where the final features are less affected by the attributes. Then, expression-related features are extracted from leaf nodes. Samples are probabilistically assigned to tree nodes at different levels such that expression-related features can be learned from all samples weighted by probabilities. We further proposed a semi-supervised strategy to learn the PAT-CNN from limited attribute-annotated samples to make the best use of available data. Experimental results on five facial expression datasets have demonstrated that the proposed PAT-CNN outperforms the baseline models by explicitly modeling attributes. More impressively, the PAT-CNN using a single model achieves the best performance for faces in the wild on the SFEW dataset, compared with the state-of-the-art methods using an ensemble of hundreds of CNNs.
More recently, identity information has been explicitly taken into consideration when learning the deep models. An identity-aware CNN @cite_15 introduced an identity-sensitive contrastive loss to learn identity-related features. An Identity-Adaptive Generation (IA-gen) method @cite_20 was proposed to synthesize person-dependent facial expressions from any input facial image using six conditional Generative Adversarial Networks (cGANs); recognition is then performed by comparing the query image with the six generated expression images, which share the same identity information (a minimal sketch of this inference step follows the citation block below). The cGAN was also used in De-expression Residue Learning (DeRL) @cite_42 to generate a neutral face image from any input image of the same identity, while the residue in the generative model contains person-independent expression information.
{ "cite_N": [ "@cite_15", "@cite_42", "@cite_20" ], "mid": [ "2730601341", "2798583514", "2805080735" ], "abstract": [ "Facial expression recognition suffers under realworldconditions, especially on unseen subjects due to highinter-subject variations. To alleviate variations introduced bypersonal attributes and achieve better facial expression recognitionperformance, a novel identity-aware convolutional neuralnetwork (IACNN) is proposed. In particular, a CNN with a newarchitecture is employed as individual streams of a bi-streamidentity-aware network. An expression-sensitive contrastive lossis developed to measure the expression similarity to ensure thefeatures learned by the network are invariant to expressionvariations. More importantly, an identity-sensitive contrastiveloss is proposed to learn identity-related information from identitylabels to achieve identity-invariant expression recognition.Extensive experiments on three public databases including aspontaneous facial expression database have shown that theproposed IACNN achieves promising results in real world.", "A facial expression is a combination of an expressive component and a neutral component of a person. In this paper, we propose to recognize facial expressions by extracting information of the expressive component through a de-expression learning procedure, called De-expression Residue Learning (DeRL). First, a generative model is trained by cGAN. This model generates the corresponding neutral face image for any input face image. We call this procedure de-expression because the expressive information is filtered out by the generative model; however, the expressive information is still recorded in the intermediate layers. Given the neutral face image, unlike previous works using pixel-level or feature-level difference for facial expression classification, our new method learns the deposition (or residue) that remains in the intermediate layers of the generative model. Such a residue is essential as it contains the expressive component deposited in the generative model from any input facial expression images. Seven public facial expression databases are employed in our experiments. With two databases (BU-4DFE and BP4D-spontaneous) for pre-training, the DeRL method has been evaluated on five databases, CK+, Oulu-CASIA, MMI, BU-3DFE, and BP4D+. The experimental results demonstrate the superior performance of the proposed method.", "Subject variation is a challenging issue for fa- cial expression recognition, especially when handling unseen subjects with small-scale lableled facial expression databases. Although transfer learning has been widely used to tackle the problem, the performance degrades on new data. In this paper, we present a novel approach (so-called IA-gen) to alleviate the issue of subject variations by regenerating expressions from any input facial images. First of all, we train conditional generative models to generate six prototypic facial expressions from any given query face image while keeping the identity related information unchanged. Generative Adversarial Networks are employed to train the conditional generative models, and each of them is designed to generate one of the prototypic facial expression images. Second, a regular CNN (FER-Net) is fine- tuned for expression classification. After the corresponding prototypic facial expressions are regenerated from each facial image, we output the last FC layer of FER-Net as features for both the input image and the generated images. 
Based on the minimum distance between the input image and the generated expression images in the feature space, the input image is classified as one of the prototypic expressions consequently. Our proposed method can not only alleviate the influence of inter-subject variations, but will also be flexible enough to integrate with any other FER CNNs for person-independent facial expression recognition. Our method has been evaluated on CK+, Oulu-CASIA, BU-3DFE and BU-4DFE databases, and the results demonstrate the effectiveness of our proposed method." ] }
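The IA-gen inference step referenced above can be sketched as follows, assuming six pretrained conditional generators and a feature extractor fer_net that returns last-FC-layer features; the function and variable names are placeholders, not the authors' API.

import torch
import torch.nn.functional as F

def ia_gen_classify(query_img, generators, fer_net):
    # query_img: [1, C, H, W]; generators: six cGANs, one per prototypic expression
    q_feat = fer_net(query_img)                  # features of the query image
    dists = []
    for g in generators:
        proto = g(query_img)                     # same identity, one fixed expression
        dists.append(F.pairwise_distance(q_feat, fer_net(proto)))
    # predict the expression whose regenerated image is closest in feature space
    return int(torch.argmin(torch.stack(dists)))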
1812.07067
2903794034
In this paper, we proposed a novel Probabilistic Attribute Tree-CNN (PAT-CNN) to explicitly deal with the large intra-class variations caused by identity-related attributes, e.g., age, race, and gender. Specifically, a novel PAT module with an associated PAT loss was proposed to learn features in a hierarchical tree structure organized according to attributes, where the final features are less affected by the attributes. Then, expression-related features are extracted from leaf nodes. Samples are probabilistically assigned to tree nodes at different levels such that expression-related features can be learned from all samples weighted by probabilities. We further proposed a semi-supervised strategy to learn the PAT-CNN from limited attribute-annotated samples to make the best use of available data. Experimental results on five facial expression datasets have demonstrated that the proposed PAT-CNN outperforms the baseline models by explicitly modeling attributes. More impressively, the PAT-CNN using a single model achieves the best performance for faces in the wild on the SFEW dataset, compared with the state-of-the-art methods using an ensemble of hundreds of CNNs.
Apart from learning identity-free expression-related features, Multi-task Learning (MTL) has been employed @cite_22 to simultaneously perform various face-related tasks, including detection, alignment, pose estimation, gender recognition, age estimation, smile detection, and face recognition, using a single deep CNN. To deal with incomplete annotations and thus insufficient and unbalanced training data for the various tasks, the all-in-one framework was split into subnetworks, which were trained individually. Our approach differs significantly from the MTL framework @cite_22 in that we jointly minimize the loss of the major task, i.e., expression recognition errors, and those of the auxiliary tasks, i.e., the PAT loss, calculated in a hierarchical tree structure (a minimal sketch of such a joint objective follows the citation block below). In addition, semi-supervised learning is employed in our approach to make the best use of all available data.
{ "cite_N": [ "@cite_22" ], "mid": [ "2548780814" ], "abstract": [ "We present a multi-purpose algorithm for simultaneousface detection, face alignment, pose estimation, genderrecognition, smile detection, age estimation and face recognitionusing a single deep convolutional neural network (CNN). Theproposed method employs a multi-task learning framework thatregularizes the shared parameters of CNN and builds a synergyamong different domains and tasks. Extensive experimentsshow that the network has a better understanding of face andachieves state-of-the-art result for most of these tasks" ] }
1812.07067
2903794034
In this paper, we proposed a novel Probabilistic Attribute Tree-CNN (PAT-CNN) to explicitly deal with the large intra-class variations caused by identity-related attributes, e.g., age, race, and gender. Specifically, a novel PAT module with an associated PAT loss was proposed to learn features in a hierarchical tree structure organized according to attributes, where the final features are less affected by the attributes. Then, expression-related features are extracted from leaf nodes. Samples are probabilistically assigned to tree nodes at different levels such that expression-related features can be learned from all samples weighted by probabilities. We further proposed a semi-supervised strategy to learn the PAT-CNN from limited attribute-annotated samples to make the best use of available data. Experimental results on five facial expression datasets have demonstrated that the proposed PAT-CNN outperforms the baseline models by explicitly modeling attributes. More impressively, the PAT-CNN using a single model achieves the best performance for faces in the wild on the SFEW dataset, compared with the state-of-the-art methods using an ensemble of hundreds of CNNs.
Recently, clustering has been utilized to group deep features. A recurrent framework @cite_41 updates deep features and image clusters alternately until the number of clusters reaches a predefined value. DeepCluster @cite_38 alternately groups the features by k-means and uses the subsequent assignments as supervision to learn the network (a toy sketch of this alternation follows the citation block below). Deep Density Clustering (DDC) @cite_14 groups unconstrained face images based on local compact representations and a density-based similarity measure. In contrast to these unsupervised clustering methods, the proposed PAT-CNN takes advantage of available attribute annotations and is thus capable of learning semantically meaningful clusters that are related to facial expression recognition. Moreover, data samples are probabilistically assigned to clusters at different levels of the hierarchy to alleviate misclassifications due to clustering errors.
{ "cite_N": [ "@cite_41", "@cite_38", "@cite_14" ], "mid": [ "2337374958", "", "2799118171" ], "abstract": [ "In this paper, we propose a recurrent framework for Joint Unsupervised LEarning (JULE) of deep representations and image clusters. In our framework, successive operations in a clustering algorithm are expressed as steps in a recurrent process, stacked on top of representations output by a Convolutional Neural Network (CNN). During training, image clusters and representations are updated jointly: image clustering is conducted in the forward pass, while representation learning in the backward pass. Our key idea behind this framework is that good representations are beneficial to image clustering and clustering results provide supervisory signals to representation learning. By integrating two processes into a single model with a unified weighted triplet loss and optimizing it end-to-end, we can obtain not only more powerful representations, but also more precise image clusters. Extensive experiments show that our method outperforms the state-of-the-art on image clustering across a variety of image datasets. Moreover, the learned representations generalize well when transferred to other tasks.", "", "In this paper, we consider the problem of grouping a collection of unconstrained face images in which the number of subjects is not known. We propose an unsupervised clustering algorithm called Deep Density Clustering (DDC) which is based on measuring density affinities between local neighborhoods in the feature space. By learning the minimal covering sphere for each neighborhood, information about the underlying structure is encapsulated. The encapsulation is also capable of locating high-density region of the neighborhood, which aids in measuring the neighborhood similarity. We theoretically show that the encapsulation asymptotically converges to a Parzen window density estimator. Our experiments show that DDC is a superior candidate for clustering unconstrained faces when the number of subjects is unknown. Unlike conventional linkage and density-based methods that are sensitive to the selection operating points, DDC attains more consistent and improved performance. Furthermore, the density-aware property reduces the difficulty in finding appropriate operating points." ] }