aid | mid | abstract | related_work | ref_abstract
---|---|---|---|---|
1907.05274 | 2960314297 | Autonomous vehicles (AVs) have progressed rapidly with the advancements in computer vision algorithms. The deep convolutional neural network, as the main contributor to this advancement, has boosted classification accuracy dramatically. However, the discovery of adversarial examples reveals the generalization gap between the dataset and the real world. Furthermore, affine transformations may also confuse computer-vision-based object detectors. The degradation of the perception system is undesirable for safety-critical systems such as autonomous vehicles. In this paper, a deep learning system is proposed: Affine Disentangled GAN (ADIS-GAN), which is robust against affine transformations and adversarial attacks. It is demonstrated that conventional data augmentation for affine transformations and adversarial attacks are orthogonal, while ADIS-GAN can handle both attacks at the same time. Useful information, such as the image rotation angle and scaling factor, is also generated by ADIS-GAN. On the MNIST dataset, ADIS-GAN achieves over 98 percent classification accuracy within 30 degrees of rotation, and over 90 percent classification accuracy against FGSM and PGD adversarial attacks. | Several studies address the invariance properties of deep CNNs under affine transformations. In @cite_17, anti-distortion classification is achieved by inserting a Spatial Transformer layer into the given network. However, the affine parameters are not presented in a disentangled manner, which makes the model less interpretable. The Transforming Autoencoder @cite_5 uses an auto-encoder to model 2D affine transformations applied to images. The trained generative model can learn to generate transformed images in a disentangled way; however, it does not address improving classification accuracy. In @cite_16, transformations that preserve object identity are analyzed in the framework of symmetry groups. In @cite_18 @cite_25, filter banks are designed to make the classifier transformation-invariant. | {
"cite_N": [
"@cite_18",
"@cite_5",
"@cite_16",
"@cite_25",
"@cite_17"
],
"mid": [
"252252322",
"2535873859",
"2136026194",
"2144578488",
"2950159395"
],
"abstract": [
"Convolutional Neural Networks (ConvNets) have shown excellent results on many visual classification tasks. With the exception of ImageNet, these datasets are carefully crafted such that objects are well-aligned at similar scales. Naturally, the feature learning problem gets more challenging as the amount of variation in the data increases, as the models have to learn to be invariant to certain changes in appearance. Recent results on the ImageNet dataset show that given enough data, ConvNets can learn such invariances producing very discriminative features [1]. But could we do more: use less parameters, less data, learn more discriminative features, if certain invariances were built into the learning process? In this paper we present a simple model that allows ConvNets to learn features in a locally scale-invariant manner without increasing the number of model parameters. We show on a modified MNIST dataset that when faced with scale variation, building in scale-invariance allows ConvNets to learn more discriminative features with reduced chances of over-fitting.",
"Machine learning is enabling a myriad innovations, including new algorithms for cancer diagnosis and self-driving cars. The broad use of machine learning makes it important to understand the extent to which machine-learning algorithms are subject to attack, particularly when used in applications where physical security or safety is at risk. In this paper, we focus on facial biometric systems, which are widely used in surveillance and access control. We define and investigate a novel class of attacks: attacks that are physically realizable and inconspicuous, and allow an attacker to evade recognition or impersonate another individual. We develop a systematic method to automatically generate such attacks, which are realized through printing a pair of eyeglass frames. When worn by the attacker whose image is supplied to a state-of-the-art face-recognition algorithm, the eyeglasses allow her to evade being recognized or to impersonate another individual. Our investigation focuses on white-box face-recognition systems, but we also demonstrate how similar techniques can be used in black-box scenarios, as well as to avoid face detection.",
"The chief difficulty in object recognition is that objects' classes are obscured by a large number of extraneous sources of variability, such as pose and part deformation. These sources of variation can be represented by symmetry groups, sets of composable transformations that preserve object identity. Convolutional neural networks (convnets) achieve a degree of translational invariance by computing feature maps over the translation group, but cannot handle other groups. As a result, these groups' effects have to be approximated by small translations, which often requires augmenting datasets and leads to high sample complexity. In this paper, we introduce deep symmetry networks (symnets), a generalization of convnets that forms feature maps over arbitrary symmetry groups. Symnets use kernel-based interpolation to tractably tie parameters and pool over symmetry spaces of any dimension. Like convnets, they are trained with backpropagation. The composition of feature transformations through the layers of a symnet provides a new approach to deep learning. Experiments on NORB and MNIST-rot show that symnets over the affine group greatly reduce sample complexity relative to convnets by better capturing the symmetries in the data.",
"Learning invariant representations is an important problem in machine learning and pattern recognition. In this paper, we present a novel framework of transformation-invariant feature learning by incorporating linear transformations into the feature learning algorithms. For example, we present the transformation-invariant restricted Boltzmann machine that compactly represents data by its weights and their transformations, which achieves invariance of the feature representation via probabilistic max pooling. In addition, we show that our transformation-invariant feature learning framework can also be extended to other unsupervised learning methods, such as autoencoders or sparse coding. We evaluate our method on several image classification benchmark datasets, such as MNIST variations, CIFAR-10, and STL-10, and show competitive or superior classification performance when compared to the state-of-the-art. Furthermore, our method achieves state-of-the-art performance on phone classification tasks with the TIMIT dataset, which demonstrates wide applicability of our proposed algorithms to other domains.",
"State-of-the-art deep neural networks have achieved impressive results on many image classification tasks. However, these same architectures have been shown to be unstable to small, well sought, perturbations of the images. Despite the importance of this phenomenon, no effective methods have been proposed to accurately compute the robustness of state-of-the-art deep classifiers to such perturbations on large-scale datasets. In this paper, we fill this gap and propose the DeepFool algorithm to efficiently compute perturbations that fool deep networks, and thus reliably quantify the robustness of these classifiers. Extensive experimental results show that our approach outperforms recent methods in the task of computing adversarial perturbations and making classifiers more robust."
]
} |
1907.05321 | 2957988017 | Time is an important feature in many applications involving events that occur synchronously and/or asynchronously. To effectively consume time information, recent studies have focused on designing new architectures. In this paper, we take an orthogonal but complementary approach by providing a model-agnostic vector representation for time, called Time2Vec, that can be easily imported into many existing and future architectures and improve their performance. We show on a range of models and problems that replacing the notion of time with its Time2Vec representation improves the performance of the final model. | There is a long history of algorithms for predictive modeling in time series analysis. They include auto-regressive techniques @cite_40 that predict future measurements in a sequence based on a window of past measurements. Since it is not always clear how long the window of past measurements should be, hidden Markov models @cite_54, dynamic Bayesian networks @cite_9, and dynamic conditional random fields @cite_46 use hidden states as a finite memory that can remember information arbitrarily far in the past. These models can be seen as special cases of recurrent neural networks @cite_19. They typically assume that inputs are synchronous, arrive at regular time intervals, and that the underlying process is stationary with respect to time. It is possible to aggregate asynchronous events into time bins and to use synchronous models over the bins @cite_18 @cite_56. Asynchronous events can also be directly modeled with point processes (e.g., Poisson, Cox, and Hawkes point processes) @cite_13 @cite_30 @cite_29 @cite_58 @cite_57 and continuous-time normalizing flows @cite_48. Alternatively, one can also interpolate or make predictions at arbitrary time stamps with Gaussian processes @cite_31 or support vector regression @cite_51. | {
"cite_N": [
"@cite_30",
"@cite_18",
"@cite_31",
"@cite_48",
"@cite_9",
"@cite_54",
"@cite_29",
"@cite_56",
"@cite_57",
"@cite_19",
"@cite_40",
"@cite_51",
"@cite_46",
"@cite_58",
"@cite_13"
],
"mid": [
"",
"2962769218",
"1502922572",
"2963755523",
"2110575115",
"2105594594",
"2963148415",
"2788415486",
"2788849864",
"",
"2070681743",
"2137226992",
"2158823144",
"2890009182",
""
],
"abstract": [
"",
"",
"We give a basic introduction to Gaussian Process regression models. We focus on understanding the role of the stochastic process and how it is used to define a distribution over functions. We present the simple equations for incorporating training data and examine how to learn the hyperparameters using the marginal likelihood. We explain the practical advantages of Gaussian Process and end with conclusions and a look at the current trends in GP work.",
"We introduce a new family of deep neural network models. Instead of specifying a discrete sequence of hidden layers, we parameterize the derivative of the hidden state using a neural network. The output of the network is computed using a blackbox differential equation solver. These continuous-depth models have constant memory cost, adapt their evaluation strategy to each input, and can explicitly trade numerical precision for speed. We demonstrate these properties in continuous-depth residual networks and continuous-time latent variable models. We also construct continuous normalizing flows, a generative model that can train by maximum likelihood, without partitioning or ordering the data dimensions. For training, we show how to scalably backpropagate through any ODE solver, without access to its internal operations. This allows end-to-end training of ODEs within larger models.",
"",
"The basic theory of Markov chains has been known to mathematicians and engineers for close to 80 years, but it is only in the past decade that it has been applied explicitly to problems in speech processing. One of the major reasons why speech models, based on Markov chains, have not been developed until recently was the lack of a method for optimizing the parameters of the Markov model to match observed signal patterns. Such a method was proposed in the late 1960's and was immediately applied to speech processing in several research institutions. Continued refinements in the theory and implementation of Markov modelling techniques have greatly enhanced the method, leading to a wide range of applications of these models. It is the purpose of this tutorial paper to give an introduction to the theory of Markov models, and to illustrate how they have been applied to problems in speech recognition.",
"Point processes are becoming very popular in modeling asynchronous sequential data due to their sound mathematical foundation and strength in modeling a variety of real-world phenomena. Currently, they are often characterized via intensity function which limits model's expressiveness due to unrealistic assumptions on its parametric form used in practice. Furthermore, they are learned via maximum likelihood approach which is prone to failure in multi-modal distributions of sequences. In this paper, we propose an intensity-free approach for point processes modeling that transforms nuisance processes to a target one. Furthermore, we train the model using a likelihood-free leveraging Wasserstein distance between point processes. Experiments on various synthetic and real-world data substantiate the superiority of the proposed point process model over conventional ones.",
"Event-driven neuromorphic spiking sensors such as the silicon retina and the silicon cochlea encode the external sensory stimuli as asynchronous streams of spikes across different channels or pixels. Combining state-of-art deep neural networks with the asynchronous outputs of these sensors has produced encouraging results on some datasets but remains challenging. While the lack of effective spiking networks to process the spike streams is one reason, the other reason is that the pre-processing methods required to convert the spike streams to frame-based features needed for the deep networks still require further investigation. This work investigates the effectiveness of synchronous and asynchronous frame-based features generated using spike count and constant event binning in combination with the use of a recurrent neural network for solving a classification task using N-TIDIGITs. This spike-based dataset consists of recordings from the Dynamic Audio Sensor, a spiking silicon cochlea sensor, in response to the TIDIGITS audio dataset. We also propose a new pre-processing method which applies an exponential kernel on the output cochlea spikes so that the interspike timing information is better preserved. The results from the N-TIDIGITS dataset show that the exponential features perform better than the spike count features, with over 91 accuracy on the digit classification task. This accuracy corresponds to an improvement of at least 2.5 over the use of spike count features, establishing a new state of the art for this dataset.",
"",
"",
"This is a preliminary report on a newly developed simple and practical procedure of statistical identification of predictors by using autoregressive models. The use of autoregressive representation of a stationary time series (or the innovations approach) in the analysis of time series has recently been attracting attentions of many research workers and it is expected that this time domain approach will give answers to many problems, such as the identification of noisy feedback systems, which could not be solved by the direct application of frequency domain approach [1], [2], [3], [9].",
"A new regression technique based on Vapnik's concept of support vectors is introduced. We compare support vector regression (SVR) with a committee regression technique (bagging) based on regression trees and ridge regression done in feature space. On the basis of these experiments, it is expected that SVR will have advantages in high dimensionality space because SVR optimization does not depend on the dimensionality of the input space.",
"In sequence modeling, we often wish to represent complex interaction between labels, such as when performing multiple, cascaded labeling tasks on the same sequence, or when long-range dependencies exist. We present dynamic conditional random fields (DCRFs), a generalization of linear-chain conditional random fields (CRFs) in which each time slice contains a set of state variables and edges---a distributed state representation as in dynamic Bayesian networks (DBNs)---and parameters are tied across slices. Since exact inference can be intractable in such models, we perform approximate inference using several schedules for belief propagation, including tree-based reparameterization (TRP). On a natural-language chunking task, we show that a DCRF performs better than a series of linear-chain CRFs, achieving comparable performance using only half the training data.",
"Many real world problems from sustainability, healthcare and Internet generate discrete events in continuous time. The generative processes of these data can be very complex, requiring flexible models to capture their dynamics. Temporal point processes offer an elegant framework for modeling such event data. However, sophisticated point process models typically leads to intractable likelihood functions, making model fitting difficult in practice. We address this challenge from the perspective of reinforcement learning (RL), and relate the intensity function of a point process to a stochastic policy in reinforcement learning. We parameterize the policy as a flexible recurrent neural network, and reward models which can mimic the observed event distribution. Since the reward function is unknown in practice, we also uncover an analytic form of the reward function using an inverse reinforcement learning formulation and functions from a reproducing kernel Hilbert space. This new RL framework allows us to derive an efficient policy gradient algorithm for learning flexible point process models, and we show that it performs well in both synthetic and real data.",
""
]
} |
1907.05321 | 2957988017 | Time is an important feature in many applications involving events that occur synchronously and/or asynchronously. To effectively consume time information, recent studies have focused on designing new architectures. In this paper, we take an orthogonal but complementary approach by providing a model-agnostic vector representation for time, called Time2Vec, that can be easily imported into many existing and future architectures and improve their performance. We show on a range of models and problems that replacing the notion of time with its Time2Vec representation improves the performance of the final model. | While there is a body of literature on designing neural networks with sine activations @cite_0 @cite_14 @cite_8 @cite_45 @cite_23, our work uses sine only for transforming time; the rest of the network uses other activations. There is also a set of techniques that consider time as yet another feature and concatenate time (or some hand-designed features of time, such as the log and/or inverse of the delta time) with the input @cite_11 @cite_4 @cite_3 @cite_43 @cite_7 @cite_42 @cite_49 @cite_15. Several such approaches for dynamic (knowledge) graphs have been surveyed in the literature. These models can directly benefit from our proposed vector embedding, Time2Vec, by concatenating Time2Vec with the input instead of their time features. Other works @cite_24 @cite_21 @cite_25 @cite_35 @cite_22 @cite_58 propose new neural architectures that take time (or some features of time) into account. As a proof of concept, we show how Time2Vec can be used in one of these architectures to better exploit temporal information; it can potentially be used in other architectures as well. | {
"cite_N": [
"@cite_35",
"@cite_22",
"@cite_42",
"@cite_3",
"@cite_43",
"@cite_15",
"@cite_58",
"@cite_4",
"@cite_8",
"@cite_21",
"@cite_23",
"@cite_49",
"@cite_7",
"@cite_25",
"@cite_14",
"@cite_0",
"@cite_24",
"@cite_45",
"@cite_11"
],
"mid": [
"2741166240",
"2804072087",
"2962948632",
"2509830164",
"2742491462",
"2898104583",
"2890009182",
"",
"2142075376",
"2739805805",
"2306429512",
"2903096638",
"2804604520",
"2962817261",
"2122824685",
"1512416947",
"2547418827",
"2188755879",
"2964006392"
],
"abstract": [
"Modeling temporal sequences plays a fundamental role in various modern applications and has drawn more and more attentions in the machine learning community. Among those efforts on improving the capability to represent temporal data, the Long Short-Term Memory (LSTM) has achieved great success in many areas. Although the LSTM can capture long-range dependency in the time domain, it does not explicitly model the pattern occurrences in the frequency domain that plays an important role in tracking and predicting data points over various time cycles. We propose the State-Frequency Memory (SFM), a novel recurrent architecture that allows to separate dynamic patterns across different frequency components and their impacts on modeling the temporal contexts of input sequences. By jointly decomposing memorized dynamics into state-frequency components, the SFM is able to offer a fine-grained analysis of temporal sequences by capturing the dependency of uncovered patterns in both time and frequency domains. Evaluations on several temporal modeling tasks demonstrate the SFM can yield competitive performances, in particular as compared with the state-of-the-art LSTM models.",
"In a wide variety of applications, humans interact with a complex environment by means of asynchronous stochastic discrete events in continuous time. Can we design online interventions that will help humans achieve certain goals in such asynchronous setting? In this paper, we address the above problem from the perspective of deep reinforcement learning of marked temporal point processes, where both the actions taken by an agent and the feedback it receives from the environment are asynchronous stochastic discrete events characterized using marked temporal point processes. In doing so, we define the agent's policy using the intensity and mark distribution of the corresponding process and then derive a flexible policy gradient method, which embeds the agent's actions and the feedback it receives into real-valued vectors using deep recurrent neural networks. Our method does not make any assumptions on the functional form of the intensity and mark distribution of the feedback and it allows for arbitrarily complex reward functions. We apply our methodology to two different applications in personalized teaching and viral marketing and, using data gathered from Duolingo and Twitter, we show that it may be able to find interventions to help learners and marketers achieve their goals more effectively than alternatives.",
"",
"Large volumes of event data are becoming increasingly available in a wide variety of applications, such as healthcare analytics, smart cities and social network analysis. The precise time interval or the exact distance between two events carries a great deal of information about the dynamics of the underlying systems. These characteristics make such data fundamentally different from independently and identically distributed data and time-series data where time and space are treated as indexes rather than random variables. Marked temporal point processes are the mathematical framework for modeling event data with covariates. However, typical point process models often make strong assumptions about the generative processes of the event data, which may or may not reflect the reality, and the specifically fixed parametric assumptions also have restricted the expressive power of the respective processes. Can we obtain a more expressive model of marked temporal point processes? How can we learn such a model from massive data? In this paper, we propose the Recurrent Marked Temporal Point Process (RMTPP) to simultaneously model the event timings and the markers. The key idea of our approach is to view the intensity function of a temporal point process as a nonlinear function of the history, and use a recurrent neural network to automatically learn a representation of influences from the event history. We develop an efficient stochastic gradient algorithm for learning the model parameters which can readily scale up to millions of events. Using both synthetic and real world datasets, we show that, in the case where the true models have parametric specifications, RMTPP can learn the dynamics of such models without the need to know the actual parametric forms; and in the case where the true models are unknown, RMTPP can also learn the dynamics and achieve better predictive performance than other parametric alternatives based on particular prior assumptions.",
"In the study of various diseases, heterogeneity among patients usually leads to different progression patterns and may require different types of therapeutic intervention. Therefore, it is important to study patient subtyping, which is grouping of patients into disease characterizing subtypes. Subtyping from complex patient data is challenging because of the information heterogeneity and temporal dynamics. Long-Short Term Memory (LSTM) has been successfully used in many domains for processing sequential data, and recently applied for analyzing longitudinal patient records. The LSTM units are designed to handle data with constant elapsed times between consecutive elements of a sequence. Given that time lapse between successive elements in patient records can vary from days to months, the design of traditional LSTM may lead to suboptimal performance. In this paper, we propose a novel LSTM unit called Time-Aware LSTM (T-LSTM) to handle irregular time intervals in longitudinal patient records. We learn a subspace decomposition of the cell memory which enables time decay to discount the memory content according to the elapsed time. We propose a patient subtyping model that leverages the proposed T-LSTM in an auto-encoder to learn a powerful single representation for sequential records of patients, which are then used to cluster patients into clinical subtypes. Experiments on synthetic and real world datasets show that the proposed T-LSTM architecture captures the underlying structures in the sequences with time irregularities.",
"Graphs are essential representations of many real-world data such as social networks. Recent years have witnessed the increasing efforts made to extend the neural network models to graph-structured data. These methods, which are usually known as the graph neural networks, have been applied to advance many graphs related tasks such as reasoning dynamics of the physical system, graph classification, and node classification. Most of the existing graph neural network models have been designed for static graphs, while many real-world graphs are inherently dynamic. For example, social networks are naturally evolving as new users joining and new relations being created. Current graph neural network models cannot utilize the dynamic information in dynamic graphs. However, the dynamic information has been proven to enhance the performance of many graph analytic tasks such as community detection and link prediction. Hence, it is necessary to design dedicated graph neural networks for dynamic graphs. In this paper, we propose DGNN, a new D ynamic G raph N eural N etwork model, which can model the dynamic information as the graph evolving. In particular, the proposed framework can keep updating node information by capturing the sequential information of edges (interactions), the time intervals between edges and information propagation coherently. Experimental results on various dynamic graphs demonstrate the effectiveness of the proposed framework.",
"Many real world problems from sustainability, healthcare and Internet generate discrete events in continuous time. The generative processes of these data can be very complex, requiring flexible models to capture their dynamics. Temporal point processes offer an elegant framework for modeling such event data. However, sophisticated point process models typically leads to intractable likelihood functions, making model fitting difficult in practice. We address this challenge from the perspective of reinforcement learning (RL), and relate the intensity function of a point process to a stochastic policy in reinforcement learning. We parameterize the policy as a flexible recurrent neural network, and reward models which can mimic the observed event distribution. Since the reward function is unknown in practice, we also uncover an analytic form of the reward function using an inverse reinforcement learning formulation and functions from a reproducing kernel Hilbert space. This new RL framework allows us to derive an efficient policy gradient algorithm for learning flexible point process models, and we show that it performs well in both synthetic and real data.",
"",
"The problem of handwritten digit recognition is dealt with by multilayer feedforward neural networks with different types of neuronal activation functions. Three types of activation functions are adopted in the network, namely, the traditional sigmoid function, sinusoidal function and a periodic function that can be considered as a combination of the first two functions. To speed up the learning, as well as to reduce the network size, an extended Kalman filter algorithm with the pruning method is used to train the network. Simulation results show that periodic activation functions perform better than monotonic ones in solving multi-cluster classification problems such as handwritten digit recognition.",
"Recently, Recurrent Neural Network (RNN) solutions for recommender systems (RS) are becoming increasingly popular. The insight is that, there exist some intrinsic patterns in the sequence of users' actions, and RNN has been proved to perform excellently when modeling sequential data. In traditional tasks such as language modeling, RNN solutions usually only consider the sequential order of objects without the notion of interval. However, in RS, time intervals between users' actions are of significant importance in capturing the relations of users' actions and the traditional RNN architectures are not good at modeling them. In this paper, we propose a new LSTM variant, i.e. Time-LSTM, to model users' sequential actions. Time-LSTM equips LSTM with time gates to model time intervals. These time gates are specifically designed, so that compared to the traditional RNN solutions, Time-LSTM better captures both of users' short-term and long-term interests, so as to improve the recommendation performance. Experimental results on two real-world datasets show the superiority of the recommendation method using Time-LSTM over the traditional methods.",
"This paper presents new theoretical results on the multistability analysis of a class of recurrent neural networks with nonmonotonic activation functions and mixed time delays. Several sufficient conditions are derived for ascertaining the existence of 3 n equilibrium points and the exponential stability of 2 n equilibrium points via state space partition by using the geometrical properties of activation functions and algebraic properties of nonsingular M-matrix. Compared with existing results, the conditions herein are much more computable with one order less linear matrix inequalities. Furthermore, the attraction basins of these exponentially stable equilibrium points are estimated. It is revealed that the attraction basins of the 2 n equilibrium points can be larger than their originally partitioned subspaces. Three numerical examples are elaborated with typical nonmonotonic activation functions to substantiate the efficacy and characteristics of the theoretical results.",
"Modeling a sequence of interactions between users and items (e.g., products, posts, or courses) is crucial in domains such as e-commerce, social networking, and education to predict future interactions. Representation learning presents an attractive solution to model the dynamic evolution of user and item properties, where each user item can be embedded in a euclidean space and its evolution can be modeled by dynamic changes in embedding. However, existing embedding methods either generate static embeddings, treat users and items independently, or are not scalable. Here we present JODIE, a coupled recurrent model to jointly learn the dynamic embeddings of users and items from a sequence of user-item interactions. JODIE has three components. First, the update component updates the user and item embedding from each interaction using their previous embeddings with the two mutually-recursive Recurrent Neural Networks. Second, a novel projection component is trained to forecast the embedding of users at any future time. Finally, the prediction component directly predicts the embedding of the item in a future interaction. For models that learn from a sequence of interactions, traditional training data batching cannot be done due to complex user-user dependencies. Therefore, we present a novel batching algorithm called t-Batch that generates time-consistent batches of training data that can run in parallel, giving massive speed-up. We conduct six experiments on two prediction tasks---future interaction prediction and state change prediction---using four real-world datasets. We show that JODIE outperforms six state-of-the-art algorithms in these tasks by up to 22.4 . Moreover, we show that JODIE is highly scalable and up to 9.2x faster than comparable models. As an additional experiment, we illustrate that JODIE can predict student drop-out from courses five interactions in advance.",
"We have recently seen many successful applications of recurrent neural networks (RNNs) on electronic medical records (EMRs), which contain histories of patients' diagnoses, medications, and other various events, in order to predict the current and future states of patients. Despite the strong performance of RNNs, it is often challenging for users to understand why the model makes a particular prediction. Such black-box nature of RNNs can impede its wide adoption in clinical practice. Furthermore, we have no established methods to interactively leverage users' domain expertise and prior knowledge as inputs for steering the model. Therefore, our design study aims to provide a visual analytics solution to increase interpretability and interactivity of RNNs via a joint effort of medical experts, artificial intelligence scientists, and visual analytics researchers. Following the iterative design process between the experts, we design, implement, and evaluate a visual analytics tool called RetainVis, which couples a newly improved, interpretable, and interactive RNN-based model called RetainEX and visualizations for users' exploration of EMR data in the context of prediction tasks. Our study shows the effective use of RetainVis for gaining insights into how individual medical codes contribute to making risk predictions, using EMRs of patients with heart failure and cataract symptoms. Our study also demonstrates how we made substantial changes to the state-of-the-art RNN model called RETAIN in order to make use of temporal information and increase interactivity. This study will provide a useful guideline for researchers that aim to design an interpretable and interactive visual analytics tool for RNNs.",
"Many events occur in the world. Some event types are stochastically excited or inhibitedߞin the sense of having their probabilities elevated or decreasedߞby patterns in the sequence of previous events. Discovering such patterns can help us predict which type of event will happen next and when. We model streams of discrete events in continuous time, by constructing a neurally self-modulating multivariate point process in which the intensities of multiple event types evolve according to a novel continuous-time LSTM. This generative model allows past events to influence the future in complex and realistic ways, by conditioning future event intensities on the hidden state of a recurrent neural network that has consumed the stream of past events. Our model has desirable qualitative properties. It achieves competitive likelihood and predictive accuracy on real and synthetic datasets, including under missing-data conditions.",
"This article discusses a number of reasons why the use of nonmonotonic functions as activation functions can lead to a marked improvement in the performance of a neural network. Using a wide range of benchmarks we show that a multilayer feedforward network using sine activation functions (and an appropriate choice of initial parameters) learns much faster than one incorporating sigmoid functions-as much as 150-500 times faster when both types are trained with backpropagation. Learning speed also compares favorably with speeds reported using modified versions of the backpropagation algorithm. In addition, the computational and generalization capacity also increases.",
"The backpropagation learning algorithm for neural networks is developed into a formalism for nonlinear signal processing. We illustrate the method by selecting two common topics in signal processing, prediction and system modelling, and show that nonlinear applications can be handled extremely well by using neural networks. The formalism is a natural, nonlinear extension of the linear Least Mean Squares algorithm commonly used in adaptive signal processing. Simulations are presented that document the additional performance achieved by using nonlinear neural networks. First, we demonstrate that the formalism may be used to predict points in a highly chaotic time series with orders of magnitude increase in accuracy over conventional methods including the Linear Predictive Method and the Gabor-Volterra-Weiner Polynomial Method. Deterministic chaos is thought to be involved in many physical situations including the onset of turbulence in fluids, chemical reactions and plasma physics. Secondly, we demonstrate the use of the formalism in nonlinear system modelling by providing a graphic example in which it is clear that the neural network has accurately modelled the nonlinear transfer function. It is interesting to note that the formalism provides explicit, analytic, global, approximations to the nonlinear maps underlying the various time series. Furthermore, the neural net more » seems to be extremely parsimonious in its requirements for data points from the time series. We show that the neural net is able to perform well because it globally approximates the relevant maps by performing a kind of generalized mode decomposition of the maps. 24 refs., 13 figs. « less",
"Recurrent Neural Networks (RNNs) have become the state-of-the-art choice for extracting patterns from temporal sequences. However, current RNN models are ill-suited to process irregularly sampled data triggered by events generated in continuous time by sensors or other neurons. Such data can occur, for example, when the input comes from novel event-driven artificial sensors that generate sparse, asynchronous streams of events or from multiple conventional sensors with different update intervals. In this work, we introduce the Phased LSTM model, which extends the LSTM unit by adding a new time gate. This gate is controlled by a parametrized oscillation with a frequency range that produces updates of the memory cell only during a small percentage of the cycle. Even with the sparse updates imposed by the oscillation, the Phased LSTM network achieves faster convergence than regular LSTMs on tasks which require learning of long sequences. The model naturally integrates inputs from sensors of arbitrary sampling rates, thereby opening new areas of investigation for processing asynchronous sensory events that carry timing information. It also greatly improves the performance of LSTMs in standard RNN applications, and does so with an order-of-magnitude fewer computes at runtime.",
"This paper presents some ideas about a new neural network architecture that can be compared to a Fourier analysis when dealing periodic signals. Such architecture is based on sinusoidal activation functions with an axo-axonic architecture (1). A biological axo-axonic connection between two neurons is defined as the weight in a connection in given by the output of another third neuron. This idea can be implemented in the so called Enhanced Neural Networks (2) in which two Multilayer Perceptrons are used; the first one will output the weights that the second MLP uses to computed the desired output. This kind of neural network has universal approximation properties (3) even with lineal activation functions.",
""
]
} |
1907.05380 | 2960069057 | Super-resolution (SR) is a technique that allows increasing the resolution of a given image. As it has applications in many areas, from medical imaging to consumer electronics, several SR methods have been proposed. Currently, the best-performing methods are based on convolutional neural networks (CNNs) and require extensive datasets for training. However, at test time, they fail to impose consistency between the super-resolved image and the given low-resolution image, a property that classic reconstruction-based algorithms naturally enforce despite having poorer performance. Motivated by this observation, we propose a new framework that joins both approaches and produces images of higher quality than any of the prior methods. Although our framework requires additional computation, our experiments on Set5, Set14, and BSD100 show that it systematically produces images with a better peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) than the current state-of-the-art CNN architectures for SR. | Statistical models. The work in @cite_33 models the gradient profile of an image as a Gauss-Markov random field and performs SR via maximum likelihood estimation, with the constraint that the downsampled solution coincides with the given LR image. In @cite_25, images are super-resolved by ensuring that the gradient of the HR image is close to a gradient profile obtained from the LR image using a similar, but parametric, model, while again imposing that the downsampled solution coincides with the given LR image. | {
"cite_N": [
"@cite_25",
"@cite_33"
],
"mid": [
"2163935418",
"2061398022"
],
"abstract": [
"In this paper, we propose an image super-resolution approach using a novel generic image prior - gradient profile prior, which is a parametric prior describing the shape and the sharpness of the image gradients. Using the gradient profile prior learned from a large number of natural images, we can provide a constraint on image gradients when we estimate a hi-resolution image from a low-resolution image. With this simple but very effective prior, we are able to produce state-of-the-art results. The reconstructed hi-resolution image is sharp while has rare ringing or jaggy artifacts.",
"In this paper we propose a new method for upsampling images which is capable of generating sharp edges with reduced input-resolution grid-related artifacts. The method is based on a statistical edge dependency relating certain edge features of two different resolutions, which is generically exhibited by real-world images. While other solutions assume some form of smoothness, we rely on this distinctive edge dependency as our prior knowledge in order to increase image resolution. In addition to this relation we require that intensities are conserved; the output image must be identical to the input image when downsampled to the original resolution. Altogether the method consists of solving a constrained optimization problem, attempting to impose the correct edge relation and conserve local intensities with respect to the low-resolution input image. Results demonstrate the visual importance of having such edge features properly matched, and the method's capability to produce images in which sharp edges are successfully reconstructed."
]
} |
1907.05380 | 2960069057 | Super-resolution (SR) is a technique that allows increasing the resolution of a given image. As it has applications in many areas, from medical imaging to consumer electronics, several SR methods have been proposed. Currently, the best-performing methods are based on convolutional neural networks (CNNs) and require extensive datasets for training. However, at test time, they fail to impose consistency between the super-resolved image and the given low-resolution image, a property that classic reconstruction-based algorithms naturally enforce despite having poorer performance. Motivated by this observation, we propose a new framework that joins both approaches and produces images of higher quality than any of the prior methods. Although our framework requires additional computation, our experiments on Set5, Set14, and BSD100 show that it systematically produces images with a better peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) than the current state-of-the-art CNN architectures for SR. | Total variation methods. A different line of work directly encodes prior knowledge into the optimization problem. For instance, @cite_6 introduced the concept of total variation (TV) to capture the number of edges in an image and observed that natural images have small TV. Since then, several algorithms have been proposed to super-resolve images by minimizing TV. For example, @cite_16 addresses the problem by discretizing a differential equation that relates changes in the values of individual pixels to the curvature of level sets. More recently, with the development of nondifferentiable convex optimization methods, TV minimization has been addressed in the discrete setting by a wide range of algorithms @cite_35 @cite_4 @cite_38 @cite_26 @cite_23. There are essentially two versions of discrete TV, according to the norm applied to the discretized gradient at each pixel: isotropic ( @math -norm) and anisotropic ( @math -norm); see, e.g., @cite_4 @cite_23. Since both versions are convex, discrete TV minimization algorithms not only have low computational complexity (as matrix-vector operations can be performed via the FFT algorithm) but are also guaranteed to find a global minimizer. | {
"cite_N": [
"@cite_35",
"@cite_38",
"@cite_26",
"@cite_4",
"@cite_6",
"@cite_23",
"@cite_16"
],
"mid": [
"2005089986",
"2142058898",
"2023722580",
"2028349405",
"2103559027",
"2130120519",
"2127401436"
],
"abstract": [
"",
"The class of L1-regularized optimization problems has received much attention recently because of the introduction of “compressed sensing,” which allows images and signals to be reconstructed from small amounts of data. Despite this recent attention, many L1-regularized problems still remain difficult to solve, or require techniques that are very problem-specific. In this paper, we show that Bregman iteration can be used to solve a wide variety of constrained optimization problems. Using this technique, we propose a “split Bregman” method, which can solve a very broad class of L1-regularized problems. We apply this technique to the Rudin-Osher-Fatemi functional for image denoising and to a compressed sensing problem that arises in magnetic resonance imaging.",
"Accurate signal recovery or image reconstruction from indirect and possibly undersampled data is a topic of considerable interest; for example, the literature in the recent field of compressed sensing is already quite immense. This paper applies a smoothing technique and an accelerated first-order algorithm, both from Nesterov [Math. Program. Ser. A, 103 (2005), pp. 127-152], and demonstrates that this approach is ideally suited for solving large-scale compressed sensing reconstruction problems as (1) it is computationally efficient, (2) it is accurate and returns solutions with several correct digits, (3) it is flexible and amenable to many kinds of reconstruction problems, and (4) it is robust in the sense that its excellent performance across a wide range of problems does not depend on the fine tuning of several parameters. Comprehensive numerical experiments on realistic signals exhibiting a large dynamic range show that this algorithm compares favorably with recently proposed state-of-the-art methods. We also apply the algorithm to solve other problems for which there are fewer alternatives, such as total-variation minimization and convex programs seeking to minimize the @math norm of @math under constraints, in which @math is not diagonal. The code is available online as a free package in the MATLAB language.",
"Iterative shrinkage thresholding (1ST) algorithms have been recently proposed to handle a class of convex unconstrained optimization problems arising in image restoration and other linear inverse problems. This class of problems results from combining a linear observation model with a nonquadratic regularizer (e.g., total variation or wavelet-based regularization). It happens that the convergence rate of these 1ST algorithms depends heavily on the linear observation operator, becoming very slow when this operator is ill-conditioned or ill-posed. In this paper, we introduce two-step 1ST (TwIST) algorithms, exhibiting much faster convergence rate than 1ST for ill-conditioned problems. For a vast class of nonquadratic convex regularizers (lscrP norms, some Besov norms, and total variation), we show that TwIST converges to a minimizer of the objective function, for a given range of values of its parameters. For noninvertible observation operators, we introduce a monotonic version of TwIST (MTwIST); although the convergence proof does not apply to this scenario, we give experimental evidence that MTwIST exhibits similar speed gains over IST. The effectiveness of the new methods are experimentally confirmed on problems of image deconvolution and of restoration with missing samples.",
"A constrained optimization type of numerical algorithm for removing noise from images is presented. The total variation of the image is minimized subject to constraints involving the statistics of the noise. The constraints are imposed using Lagrange multipliers. The solution is obtained using the gradient-projection method. This amounts to solving a time dependent partial differential equation on a manifold determined by the constraints. As t--- 0o the solution converges to a steady state which is the denoised image. The numerical algorithm is simple and relatively fast. The results appear to be state-of-the-art for very noisy images. The method is noninvasive, yielding sharp edges in the image. The technique could be interpreted as a first step of moving each level set of the image normal to itself with velocity equal to the curvature of the level set divided by the magnitude of the gradient of the image, and a second step which projects the image back onto the constraint set.",
"Based on the classic augmented Lagrangian multiplier method, we propose, analyze and test an algorithm for solving a class of equality-constrained non-smooth optimization problems (chiefly but not necessarily convex programs) with a particular structure. The algorithm effectively combines an alternating direction technique with a nonmonotone line search to minimize the augmented Lagrangian function at each iteration. We establish convergence for this algorithm, and apply it to solving problems in image reconstruction with total variation regularization. We present numerical results showing that the resulting solver, called TVAL3, is competitive with, and often outperforms, other state-of-the-art solvers in the field. Copyright Springer Science+Business Media New York 2013",
"Image magnification is a common problem in imaging applications, requiring interpolation to \"read between the pixels\". Although many magnification interpolation algorithms have been proposed in the literature, all methods must suffer to some degree the effects of imperfect reconstruction: false high-frequency content introduced by the underlying original sampling. Most often, these effects manifest themselves as jagged contours in the image. The paper presents a method for constrained smoothing of such artifacts that attempts to produce smooth reconstructions of the image's level curves while still maintaining image fidelity. This is similar to other iterative reconstruction algorithms and to Bayesian restoration techniques, but instead of assuming a smoothness prior for the underlying intensity function it assumes smoothness of the level curves. Results show that this technique can produce images whose error properties are equivalent to the initial approximation (interpolation) used, while their contour smoothness is both visually and quantitatively improved."
]
} |
1907.05380 | 2960069057 | Super-resolution (SR) is a technique that allows increasing the resolution of a given image. As it has applications in many areas, from medical imaging to consumer electronics, several SR methods have been proposed. Currently, the best-performing methods are based on convolutional neural networks (CNNs) and require extensive datasets for training. However, at test time, they fail to impose consistency between the super-resolved image and the given low-resolution image, a property that classic reconstruction-based algorithms naturally enforce despite having poorer performance. Motivated by this observation, we propose a new framework that joins both approaches and produces images of higher quality than any of the prior methods. Although our framework requires additional computation, our experiments on Set5, Set14, and BSD100 show that it systematically produces images with a better peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) than the current state-of-the-art CNN architectures for SR. | Alternative regularizers. Natural images also have simple representations in other domains, and other regularizers have been used for SR. For example, @cite_18 uses the fact that images have sparse wavelet representations. The patches of natural images also tend to lie on a low-dimensional manifold @cite_13, which has motivated nonlocal methods. These use (nonconvex) regularizers that encourage many patches of an image to have similar features; see, e.g., @cite_2 @cite_41. Finally, different regularizers can be combined, as in @cite_3 @cite_19. | {
"cite_N": [
"@cite_18",
"@cite_41",
"@cite_3",
"@cite_19",
"@cite_2",
"@cite_13"
],
"mid": [
"",
"2137290314",
"2187351272",
"2220123745",
"1895520453",
"2120023453"
],
"abstract": [
"",
"Super-resolution reconstruction proposes a fusion of several low-quality images into one higher quality result with better optical resolution. Classic super-resolution techniques strongly rely on the availability of accurate motion estimation for this fusion task. When the motion is estimated inaccurately, as often happens for nonglobal motion fields, annoying artifacts appear in the super-resolved outcome. Encouraged by recent developments on the video denoising problem, where state-of-the-art algorithms are formed with no explicit motion estimation, we seek a super-resolution algorithm of similar nature that will allow processing sequences with general motion patterns. In this paper, we base our solution on the Nonlocal-Means (NLM) algorithm. We show how this denoising method is generalized to become a relatively simple super-resolution algorithm with no explicit motion estimation. Results on several test movies show that the proposed method is very successful in providing super-resolution on general sequences.",
"Image super-resolution (SR) aims to recover high-resolution images from their low-resolution counterparts for improving image analysis and visualization. Interpolation methods, widely used for this purpose, often result in images with blurred edges and blocking effects. More advanced methods such as total variation (TV) retain edge sharpness during image recovery. However, these methods only utilize information from local neighborhoods, neglecting useful information from remote voxels. In this paper, we propose a novel image SR method that integrates both local and global information for effective image recovery. This is achieved by, in addition to TV, low-rank regularization that enables utilization of information throughout the image. The optimization problem can be solved effectively via alternating direction method of multipliers (ADMM). Experiments on MR images of both adult and pediatric subjects demonstrate that the proposed method enhances the details in the recovered high-resolution images, and outperforms methods such as the nearest-neighbor interpolation, cubic interpolation, iterative back projection (IBP), non-local means (NLM), and TV-based up-sampling.",
"It is difficult to design an image super-resolution algorithm that can not only preserve image edges and texture structure but also keep lower computational complexity. A new super-resolution model based on sparsity regularization in Bayesian framework is presented. The fidelity term restricts the underlying image to be consistent with the observation image in terms of the image degradation model. The sparsity regularization term constraints the underlying image with a sparse representation in a proper dictionary. The non-local self-similarity is also introduced into the model. In order to make the sparse domain better represent the underlying image, we use high-frequency features extracted from the underlying image patches for sparse representation, which increases the effectiveness of sparse modeling. The proposed method learns dictionary directly from the estimated high-resolution image patches (extracted features), and the dictionary learning and the super-resolution can be fused together naturally into one coherent and iterated process. Such a self-learning method has stronger adaptability to different images and reduces dictionary training time. Experiments demonstrate the effectiveness of the proposed method. Compared with some state-of-the-art methods, the proposed method can better preserve image edges and texture details.",
"This article proposes a new framework to regularize linear inverse problems using the total variation on non-local graphs. This non-local graph allows to adapt the penalization to the geometry of the underlying function to recover. A fast algorithm computes iteratively both the solution of the regularization process and the non-local graph adapted to this solution. We show numerical applications of this method to the resolution of image processing inverse problems such as inpainting, super-resolution and compressive sampling.",
"Recently, there has been a great deal of interest in modeling the non-Gaussian structures of natural images. However, despite the many advances in the direction of sparse coding and multi-resolution analysis, the full probability distribution of pixels values in a neighborhood has not yet been described. In this study, we explore the space of data points representing the values of 3 × 3 high-contrast patches from optical and 3D range images. We find that the distribution of data is extremely “sparse” with the majority of the data points concentrated in clusters and non-linear low-dimensional manifolds. Furthermore, a detailed study of probability densities allows us to systematically distinguish between images of different modalities (optical versus range), which otherwise display similar marginal distributions. Our work indicates the importance of studying the full probability distribution of natural images, not just marginals, and the need to understand the intrinsic dimensionality and nature of the data. We believe that object-like structures in the world and the sensor properties of the probing device generate observations that are concentrated along predictable shapes in state space. Our study of natural image statistics accounts for local geometries (such as edges) in natural scenes, but does not impose such strong assumptions on the data as independent components or sparse coding by linear change of bases."
]
} |
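The NLM-based fusion described in the first cited abstract above comes down to averaging candidate pixels with patch-similarity weights, with no explicit motion estimation. A minimal Python sketch of that weighting, assuming flattened patches as inputs (the function names and the decay parameter h are illustrative, not taken from the cited implementation):

```python
import numpy as np

def nlm_weights(ref_patch, cand_patches, h=0.1):
    """Non-Local Means weights: w_j = exp(-||P_ref - P_j||^2 / h^2), normalized.

    ref_patch:    (d,)   flattened reference patch
    cand_patches: (n, d) flattened candidate patches from the sequence
    h:            decay (filtering) parameter
    """
    d2 = np.sum((cand_patches - ref_patch) ** 2, axis=1)
    w = np.exp(-d2 / (h * h))
    return w / (w.sum() + 1e-12)

def nlm_fuse(center_values, weights):
    # Weighted average of the candidates' center pixels -> fused estimate.
    return float(np.dot(weights, center_values))
```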
1907.05380 | 2960069057 | Super-resolution (SR) is a technique that allows increasing the resolution of a given image. As SR has applications in many areas, from medical imaging to consumer electronics, several SR methods have been proposed. Currently, the best performing methods are based on convolutional neural networks (CNNs) and require extensive datasets for training. However, at test time, they fail to impose consistency between the super-resolved image and the given low-resolution image, a property that classic reconstruction-based algorithms naturally enforce in spite of having poorer performance. Motivated by this observation, we propose a new framework that joins both approaches and produces images with quality superior to that of any of the prior methods. Although our framework requires additional computation, our experiments on Set5, Set14, and BSD100 show that it systematically produces images with better peak signal to noise ratio (PSNR) and structural similarity (SSIM) than the current state-of-the-art CNN architectures for SR. | Regression methods Regression methods compute the LR-HR map using a set of basis functions whose coefficients are computed via regression. The key idea is to cluster the training images and then use algorithms like kernel ridge regression (KRR) @cite_7 , Gaussian process regression @cite_10 , and random forests @cite_27 to perform SR. | {
"cite_N": [
"@cite_27",
"@cite_10",
"@cite_7"
],
"mid": [
"1950594372",
"1977581467",
"2149669120"
],
"abstract": [
"The aim of single image super-resolution is to reconstruct a high-resolution image from a single low-resolution input. Although the task is ill-posed it can be seen as finding a non-linear mapping from a low to high-dimensional space. Recent methods that rely on both neighborhood embedding and sparse-coding have led to tremendous quality improvements. Yet, many of the previous approaches are hard to apply in practice because they are either too slow or demand tedious parameter tweaks. In this paper, we propose to directly map from low to high-resolution patches using random forests. We show the close relation of previous work on single image super-resolution to locally linear regression and demonstrate how random forests nicely fit into this framework. During training the trees, we optimize a novel and effective regularized objective that not only operates on the output space but also on the input space, which especially suits the regression task. During inference, our method comprises the same well-known computational efficiency that has made random forests popular for many computer vision problems. In the experimental part, we demonstrate on standard benchmarks for single image super-resolution that our approach yields highly accurate state-of-the-art results, while being fast in both training and evaluation.",
"In this paper we address the problem of producing a high-resolution image from a single low-resolution image without any external training set. We propose a framework for both magnification and deblurring using only the original low-resolution image and its blurred version. In our method, each pixel is predicted by its neighbors through the Gaussian process regression. We show that when using a proper covariance function, the Gaussian process regression can perform soft clustering of pixels based on their local structures. We further demonstrate that our algorithm can extract adequate information contained in a single low-resolution image to generate a high-resolution image with sharp edges, which is comparable to or even superior in quality to the performance of other edge-directed and example-based super-resolution algorithms. Experimental results also show that our approach maintains high-quality performance at large magnifications.",
"This paper proposes a framework for single-image super-resolution. The underlying idea is to learn a map from input low-resolution images to target high-resolution images based on example pairs of input and output images. Kernel ridge regression (KRR) is adopted for this purpose. To reduce the time complexity of training and testing for KRR, a sparse solution is found by combining the ideas of kernel matching pursuit and gradient descent. As a regularized solution, KRR leads to a better generalization than simply storing the examples as has been done in existing example-based algorithms and results in much less noisy images. However, this may introduce blurring and ringing artifacts around major edges as sharp changes are penalized severely. A prior model of a generic image class which takes into account the discontinuity property of images is adopted to resolve this problem. Comparison with existing algorithms shows the effectiveness of the proposed method."
]
} |
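The regression idea in the record above (a learned map from an LR patch to an HR patch) can be sketched with kernel ridge regression. The data below is random stand-in data and the patch sizes and hyperparameters are illustrative assumptions; the cited works additionally cluster patches and use sparse or greedy solvers to keep training tractable:

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

# Stand-in training pairs: flattened 3x3 LR patches -> flattened 6x6 HR patches.
rng = np.random.default_rng(0)
X = rng.random((1000, 9))    # LR patches
Y = rng.random((1000, 36))   # corresponding HR patches

# One RBF kernel ridge regressor for the LR->HR patch map.
krr = KernelRidge(alpha=1e-3, kernel="rbf", gamma=0.5)
krr.fit(X, Y)

hr_patch = krr.predict(X[:1]).reshape(6, 6)  # HR estimate for one LR patch
```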
1907.05380 | 2960069057 | Super-resolution (SR) is a technique that allows increasing the resolution of a given image. As SR has applications in many areas, from medical imaging to consumer electronics, several SR methods have been proposed. Currently, the best performing methods are based on convolutional neural networks (CNNs) and require extensive datasets for training. However, at test time, they fail to impose consistency between the super-resolved image and the given low-resolution image, a property that classic reconstruction-based algorithms naturally enforce in spite of having poorer performance. Motivated by this observation, we propose a new framework that joins both approaches and produces images with quality superior to that of any of the prior methods. Although our framework requires additional computation, our experiments on Set5, Set14, and BSD100 show that it systematically produces images with better peak signal to noise ratio (PSNR) and structural similarity (SSIM) than the current state-of-the-art CNN architectures for SR. | CNN-based Currently, the best performing SR methods are based on CNNs. Taking inspiration from the dictionary learning algorithm in @cite_39 , SRCNN @cite_1 was one of the first CNNs performing image SR. It consists of a patch extraction layer, a non-linear mapping (representation) layer, and a final layer that outputs the reconstructed image. Like most neural network architectures, SRCNN requires an extensive training set, and has to be retrained for each different upscaling factor. To overcome the latter problem, DRCN @cite_28 uses a recursive convolutional layer that is repeatedly applied to obtain SR. Building on generative adversarial networks @cite_12 , state-of-the-art SR results are obtained in @cite_8 . It proposes two networks: SRGAN, which produces photo-realistic images with high perceptual quality, and SRResNet, which achieves the best PSNR and SSIM compared to all prior SR approaches. Although CNN-based methods achieve state-of-the-art SR results, they suffer from a major shortcoming that leaves room for improvement: in general, they fail to impose consistency between the HR and LR image in the testing stage. Such consistency, however, is always enforced by reconstruction-based (specifically, optimization-based) algorithms. This observation is the main motivation behind our framework, which we introduce next. | {
"cite_N": [
"@cite_8",
"@cite_28",
"@cite_1",
"@cite_39",
"@cite_12"
],
"mid": [
"2963470893",
"2214802144",
"54257720",
"",
"2099471712"
],
"abstract": [
"Despite the breakthroughs in accuracy and speed of single image super-resolution using faster and deeper convolutional neural networks, one central problem remains largely unsolved: how do we recover the finer texture details when we super-resolve at large upscaling factors? The behavior of optimization-based super-resolution methods is principally driven by the choice of the objective function. Recent work has largely focused on minimizing the mean squared reconstruction error. The resulting estimates have high peak signal-to-noise ratios, but they are often lacking high-frequency details and are perceptually unsatisfying in the sense that they fail to match the fidelity expected at the higher resolution. In this paper, we present SRGAN, a generative adversarial network (GAN) for image super-resolution (SR). To our knowledge, it is the first framework capable of inferring photo-realistic natural images for 4x upscaling factors. To achieve this, we propose a perceptual loss function which consists of an adversarial loss and a content loss. The adversarial loss pushes our solution to the natural image manifold using a discriminator network that is trained to differentiate between the super-resolved images and original photo-realistic images. In addition, we use a content loss motivated by perceptual similarity instead of similarity in pixel space. Our deep residual network is able to recover photo-realistic textures from heavily downsampled images on public benchmarks. An extensive mean-opinion-score (MOS) test shows hugely significant gains in perceptual quality using SRGAN. The MOS scores obtained with SRGAN are closer to those of the original high-resolution images than to those obtained with any state-of-the-art method.",
"We propose an image super-resolution method (SR) using a deeply-recursive convolutional network (DRCN). Our network has a very deep recursive layer (up to 16 recursions). Increasing recursion depth can improve performance without introducing new parameters for additional convolutions. Albeit advantages, learning a DRCN is very hard with a standard gradient descent method due to exploding vanishing gradients. To ease the difficulty of training, we propose two extensions: recursive-supervision and skip-connection. Our method outperforms previous methods by a large margin.",
"We propose a deep learning method for single image super-resolution (SR). Our method directly learns an end-to-end mapping between the low high-resolution images. The mapping is represented as a deep convolutional neural network (CNN) [15] that takes the low-resolution image as the input and outputs the high-resolution one. We further show that traditional sparse-coding-based SR methods can also be viewed as a deep convolutional network. But unlike traditional methods that handle each component separately, our method jointly optimizes all layers. Our deep CNN has a lightweight structure, yet demonstrates state-of-the-art restoration quality, and achieves fast speed for practical on-line usage.",
"",
"We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples."
]
} |
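The three-layer SRCNN structure described above, and the LR-consistency check that the paper argues CNNs skip at test time, can both be sketched in a few lines of PyTorch. The 9-1-5 filter sizes follow the commonly reported SRCNN configuration; treat the rest as illustrative:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SRCNN(nn.Module):
    """Minimal SRCNN-style network operating on a bicubic-upscaled input."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=9, padding=4),  # patch extraction
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=1),            # non-linear mapping
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, kernel_size=5, padding=2),  # reconstruction
        )

    def forward(self, x):
        return self.body(x)

net = SRCNN()
sr = net(torch.randn(1, 1, 64, 64))
# The consistency that reconstruction-based methods enforce and plain CNNs do
# not: downsampling the SR output should reproduce the LR observation.
lr_again = F.interpolate(sr, scale_factor=0.5, mode="bicubic", align_corners=False)
```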
1907.05336 | 2959233168 | Translation-based embedding models have gained significant attention in link prediction tasks for knowledge graphs. TransE is the primary model among translation-based embeddings and is well-known for its low complexity and high efficiency. Therefore, most of the earlier works have modified the score function of the TransE approach in order to improve the performance of link prediction tasks. Nevertheless, as proven theoretically and experimentally, the performance of TransE strongly depends on the loss function. Margin Ranking Loss (MRL) has been one of the earlier loss functions which is widely used for training TransE. However, the scores of positive triples are not necessarily enforced to be sufficiently small to fulfill the translation from head to tail by using the relation vector (the original assumption of TransE). To tackle this problem, several loss functions have been proposed recently by adding upper bounds and lower bounds to the scores of positive and negative samples. Although highly effective, previously developed models suffer from an expansion in search space for the selection of the hyperparameters (in particular the upper and lower bounds of scores) on which the performance of the translation-based models is highly dependent. In this paper, we propose a new loss function dubbed Adaptive Margin Loss (AML) for training translation-based embedding models. The formulation of the proposed loss function enables an adaptive and automated adjustment of the margin during the learning process. Therefore, instead of obtaining two values (upper bound and lower bound), only the center of a margin needs to be determined. During learning, the margin is expanded automatically until it converges. In our experiments on a set of standard benchmark datasets including Freebase and WordNet, the effectiveness of AML is confirmed for training TransE on link prediction tasks. | To address the gap of MRL, which does not prevent positive samples from receiving high scores, a limit-based scoring loss has been proposed @cite_17 . This method limits the score of positive samples by adding an upper bound ( @math ), shown as the limit-based scoring loss in fig:sota . In this way, the scores of positive samples are forced to stay below the upper bound, which significantly improves the performance of translation-based KGE models @cite_17 @cite_10 @cite_3 . The MRL is revised by adding a term ( @math ) that limits the maximum value of the positive score: The space of possible combinations of @math and @math is large, with a complexity of @math . Considering that setting ( @math ) is still a manual task in experiments, the model and its results suffer from the difficulty of finding an optimal setting by trying all possible combinations. | {
"cite_N": [
"@cite_10",
"@cite_3",
"@cite_17"
],
"mid": [
"2940576506",
"",
"2768012150"
],
"abstract": [
"Knowledge graphs (KGs), i.e. representation of information as a semantic graph, provide a significant test bed for many tasks including question answering, recommendation, and link prediction. Various amount of scholarly metadata have been made vailable as knowledge graphs from the diversity of data providers and agents. However, these high-quantities of data remain far from quality criteria in terms of completeness while growing at a rapid pace. Most of the attempts in completing such KGs are following traditional data digitization, harvesting and collaborative curation approaches. Whereas, advanced AI-related approaches such as embedding models - specifically designed for such tasks - are usually evaluated for standard benchmarks such as Freebase and Wordnet. The tailored nature of such datasets prevents those approaches to shed the lights on more accurate discoveries. Application of such models on domain-specific KGs takes advantage of enriched meta-data and provides accurate results where the underlying domain can enormously benefit. In this work, the TransE embedding model is reconciled for a specific link prediction task on scholarly metadata. The results show a significant shift in the accuracy and performance evaluation of the model on a dataset with scholarly metadata. The newly proposed version of TransE obtains 99.9 for link prediction task while original TransE gets 95 . In terms of accuracy and Hit@10, TransE outperforms other embedding models such as ComplEx, TransH and TransR experimented over scholarly knowledge graphs",
"",
"In knowledge graph embedding models, the margin-based ranking loss as the common loss function is usually used to encourage discrimination between golden triplets and incorrect triplets, which has proved effective in many translation-based models for knowledge graph embedding. However, we find that the loss function cannot ensure the fact that the scoring of correct triplets must be low enough to fulfill the translation. In this paper, we present a limit-based scoring loss to provide lower scoring of a golden triplet, and then to extend two basic translation models TransE and TransH, separately to TransE-RS and TransH-RS by combining limit-based scoring loss with margin-based ranking loss. Both the presented models have low complexities of parameters benefiting for application on large scale graphs. In experiments, we evaluate our models on two typical tasks including triplet classification and link prediction, and also analyze the scoring distributions of positive and negative triplets by different models. Experimental results show that the introduced limit-based scoring loss is effective to improve the capacities of knowledge graph embedding."
]
} |
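The difference between plain MRL and the limit-based variant discussed above can be written out directly. The hyperparameter names (gamma, u, lam) and the exact weighting below are illustrative assumptions, not the cited papers' notation:

```python
import torch

def margin_ranking_loss(pos, neg, gamma=1.0):
    # Classic MRL: only the gap between positive and negative scores matters,
    # so positive scores are free to grow arbitrarily large.
    return torch.clamp(gamma + pos - neg, min=0.0).mean()

def limit_based_loss(pos, neg, gamma=1.0, u=2.0, lam=1.0):
    # Limit-based variant: an extra hinge penalizes positive scores above the
    # upper bound u, forcing them below it. gamma, u and lam form the joint
    # hyperparameter search the text above identifies as costly.
    mrl = torch.clamp(gamma + pos - neg, min=0.0)
    bound = torch.clamp(pos - u, min=0.0)
    return (mrl + lam * bound).mean()
```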
1907.05336 | 2959233168 | Translation-based embedding models have gained significant attention in link prediction tasks for knowledge graphs. TransE is the primary model among translation-based embeddings and is well-known for its low complexity and high efficiency. Therefore, most of the earlier works have modified the score function of the TransE approach in order to improve the performance of link prediction tasks. Nevertheless, as proven theoretically and experimentally, the performance of TransE strongly depends on the loss function. Margin Ranking Loss (MRL) has been one of the earlier loss functions which is widely used for training TransE. However, the scores of positive triples are not necessarily enforced to be sufficiently small to fulfill the translation from head to tail by using the relation vector (the original assumption of TransE). To tackle this problem, several loss functions have been proposed recently by adding upper bounds and lower bounds to the scores of positive and negative samples. Although highly effective, previously developed models suffer from an expansion in search space for the selection of the hyperparameters (in particular the upper and lower bounds of scores) on which the performance of the translation-based models is highly dependent. In this paper, we propose a new loss function dubbed Adaptive Margin Loss (AML) for training translation-based embedding models. The formulation of the proposed loss function enables an adaptive and automated adjustment of the margin during the learning process. Therefore, instead of obtaining two values (upper bound and lower bound), only the center of a margin needs to be determined. During learning, the margin is expanded automatically until it converges. In our experiments on a set of standard benchmark datasets including Freebase and WordNet, the effectiveness of AML is confirmed for training TransE on link prediction tasks. | A modified version of the two previous loss functions is introduced in our previous work @cite_10 . This approach fixes the upper bound on the scores of positive samples ( @math ) and uses a sliding mechanism to move false negative samples towards positive samples, shown as Soft Margin in fig:sota . @math refers to the embedding parameters of all entities and relations in the KG. A slack variable is used for each triple (i.e. @math , where @math refers to the @math -th triple) to enable false negative samples to slide inside the margin. | {
"cite_N": [
"@cite_10"
],
"mid": [
"2940576506"
],
"abstract": [
"Knowledge graphs (KGs), i.e. representation of information as a semantic graph, provide a significant test bed for many tasks including question answering, recommendation, and link prediction. Various amount of scholarly metadata have been made vailable as knowledge graphs from the diversity of data providers and agents. However, these high-quantities of data remain far from quality criteria in terms of completeness while growing at a rapid pace. Most of the attempts in completing such KGs are following traditional data digitization, harvesting and collaborative curation approaches. Whereas, advanced AI-related approaches such as embedding models - specifically designed for such tasks - are usually evaluated for standard benchmarks such as Freebase and Wordnet. The tailored nature of such datasets prevents those approaches to shed the lights on more accurate discoveries. Application of such models on domain-specific KGs takes advantage of enriched meta-data and provides accurate results where the underlying domain can enormously benefit. In this work, the TransE embedding model is reconciled for a specific link prediction task on scholarly metadata. The results show a significant shift in the accuracy and performance evaluation of the model on a dataset with scholarly metadata. The newly proposed version of TransE obtains 99.9 for link prediction task while original TransE gets 95 . In terms of accuracy and Hit@10, TransE outperforms other embedding models such as ComplEx, TransH and TransR experimented over scholarly knowledge graphs"
]
} |
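A sketch of the soft-margin idea described above, with per-triple slack variables learned jointly with the embeddings; the exact penalty form and all names are assumptions based only on the description in this record:

```python
import torch

def soft_margin_loss(pos, neg, slack, u=2.0, margin=1.0, lam=1.0):
    """pos: scores of positive triples; neg: scores of negative triples;
    slack: one trainable slack value per negative triple, letting suspected
    false negatives slide inside the margin instead of being pushed past it."""
    slack = torch.relu(slack)                        # keep slacks non-negative
    pos_term = torch.relu(pos - u)                   # positives stay below u
    neg_term = torch.relu(u + margin - neg - slack)  # negatives pushed past u+margin, minus slack
    return pos_term.mean() + neg_term.mean() + lam * slack.pow(2).mean()
```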
1907.05375 | 2958043174 | Road boundaries, or curbs, provide autonomous vehicles with essential information when interpreting road scenes and generating behaviour plans. Although curbs convey important information, they are difficult to detect in complex urban environments (in particular in comparison to other elements of the road such as traffic signs and road markings). These difficulties arise from occlusions by other traffic participants as well as changing lighting and/or weather conditions. Moreover, road boundaries have various shapes, colours and structures while motion planning algorithms require accurate and precise metric information in real-time to generate their plans. In this paper, we present a real-time LIDAR-based approach for accurate curb detection around the vehicle (360 degree). Our approach deals with both occlusions from traffic and changing environmental conditions. To this end, we project 3D LIDAR pointcloud data into 2D bird's-eye view images (akin to Inverse Perspective Mapping). These images are then processed by trained deep networks to infer both visible and occluded road boundaries. Finally, a post-processing step filters detected curb segments and tracks them over time. Experimental results demonstrate the effectiveness of the proposed approach on real-world driving data. Hence, we believe that our LIDAR-based approach provides an efficient and effective way to detect visible and occluded curbs around the vehicle in challenging driving scenarios. | LIDAR-based methods often rely on more traditional information engineering techniques. In @cite_5 , a ring compression analysis on dense 3D LIDAR data followed by false-positive filters was used to detect curb points. Curb models are estimated using Least Trimmed Squares (LTS) and describe the road shape where curbs are occluded. However, the approach mostly considers simple examples where curbs are present on both sides of the road. Hence, this method would likely fail in more complex scenarios, such as intersections and/or roads with fully occluded curbs. Work by @cite_18 uses range and intensity information from 3D LIDAR to detect visible curbs on elevation data, which fails in the presence of occluding obstacles. Similarly, @cite_3 presents a LIDAR-based method to detect visible curbs using sliding-beam segmentation followed by segment-specific curb detection, but it fails to detect curbs behind obstacles. | {
"cite_N": [
"@cite_5",
"@cite_18",
"@cite_3"
],
"mid": [
"2284520540",
"1977960813",
"2791426782"
],
"abstract": [
"Localization is an important component of autonomous vehicles, as it enables the accomplishment of tasks, such as path planning and navigation. Although vehicle position can be obtained by GNSS devices, they are susceptible to errors and satellite signal unavailability in urban scenarios. Several map-aided localization solution methods have been proposed in the literature, but mostly for indoor environments. Maps used for localization store relevant environmental features that are extracted by a detection method. However, many feature detection methods do not consider the presence of dynamic obstacles or occlusions in the environment, which can impair the localization performance. In order to detect curbs even in occluding scenes, we developed a method based on ring compression analysis and least trimmed squares. For road marking detection, we developed a modified version of the Otsu thresholding method to segment road painting from road surfaces. Finally, the feature detection methods were integrated with a Monte Carlo localization method to estimate the vehicle position. Experimental tests in urban streets have been used to validate the proposed approach with favorable results.",
"Road curb detection and tracking is essential for the autonomous driving of intelligent vehicles on highways and urban roads. In this paper, we present a fast and robust road curb detection algorithm using 3D lidar data and Integral Laser Points (ILP) features. Range and intensity data of the 3D lidar is decomposed into elevation data and data projected on the ground plane. First, left and right road curbs are detected for each scan line using the ground projected range and intensity data and line segment features. Then, curb points of each scan line are determined using elevation data. The ILP features are proposed to speed up the both detection procedures. Finally, parabola model and RANSAC algorithm is used to fit the left and right curb points and generate vehicle controlling parameters. The proposed method and feature provide fast and reliable road curb detection speed and performance. Experiments show good results on various highways and urban roads under different situations.",
"The effective detection of curbs is fundamental and crucial for the navigation of a self-driving car. This paper presents a real-time curb detection method that automatically segments the road and detects its curbs using a 3D-LiDAR sensor. The point cloud data of the sensor are first processed to distinguish on-road and off-road areas. A sliding-beam method is then proposed to segment the road by using the off-road data. A curb-detection method is finally applied to obtain the position of curbs for each road segments. The proposed method is tested on the data sets acquired from the self-driving car of laboratory of VeCaN at Tongji University. Off-line experiments demonstrate the accuracy and robustness of the proposed method, i.e., the average recall, precision and their harmonic mean are all over 80 . Online experiments demonstrate the real-time capability for autonomous driving as the average processing time for each frame is only around 12 ms."
]
} |
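The bird's-eye-view projection that the paper above builds its pipeline on can be sketched as a max-height rasterization of the point cloud. Grid extents and resolution below are illustrative choices, not the paper's settings:

```python
import numpy as np

def pointcloud_to_bev(points, x_range=(-20.0, 20.0), y_range=(-20.0, 20.0), res=0.1):
    """points: (N, 3) LIDAR returns (x, y, z). Returns a 2D height image."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    keep = (x >= x_range[0]) & (x < x_range[1]) & (y >= y_range[0]) & (y < y_range[1])
    x, y, z = x[keep], y[keep], z[keep]
    cols = ((x - x_range[0]) / res).astype(np.int64)
    rows = ((y - y_range[0]) / res).astype(np.int64)
    h = int(round((y_range[1] - y_range[0]) / res))
    w = int(round((x_range[1] - x_range[0]) / res))
    bev = np.full((h, w), -np.inf, dtype=np.float32)
    np.maximum.at(bev, (rows, cols), z)  # keep the max height per cell
    bev[np.isneginf(bev)] = 0.0          # empty cells -> 0
    return bev
```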
1901.02404 | 2910894887 | Creating an image reflecting the content of a long text is a complex process that requires a sense of creativity. For example, creating a book cover or a movie poster based on its summary, or a food image based on its recipe. In this paper we present the new task of generating images from long text that does not describe the visual content of the image directly. For this, we build a system for generating high-resolution 256 @math 256 images of food conditioned on their recipes. The relation between the recipe text (without its title) and the visual content of the image is vague, and the textual structure of recipes is complex, consisting of two sections (ingredients and instructions) both containing multiple sentences. We used the recipe1M dataset to train and evaluate our model, which is based on the StackGAN-v2 architecture. | Generating high-resolution images conditioned on text descriptions is a fundamental problem in computer vision. This problem has been studied extensively, and various approaches have been suggested to tackle it. Deep generative models, such as @cite_0 @cite_12 @cite_13 @cite_14 , have achieved tremendous progress in this domain. To obtain high-resolution images, they use multi-stage GANs, where each stage corrects defects and adds details w.r.t. the previous stage. @cite_13 use a Deep Attentional Multimodal Similarity Model (DAMSM) for text embedding. DAMSM learns two neural networks that map sub-regions of the image and words of the sentence to a common semantic space. It thus measures the image-text similarity at the word level to compute a fine-grained loss for image generation. | {
"cite_N": [
"@cite_0",
"@cite_14",
"@cite_13",
"@cite_12"
],
"mid": [
"2964024144",
"",
"2771088323",
"2766091292"
],
"abstract": [
"Synthesizing high-quality images from text descriptions is a challenging problem in computer vision and has many practical applications. Samples generated by existing textto- image approaches can roughly reflect the meaning of the given descriptions, but they fail to contain necessary details and vivid object parts. In this paper, we propose Stacked Generative Adversarial Networks (StackGAN) to generate 256.256 photo-realistic images conditioned on text descriptions. We decompose the hard problem into more manageable sub-problems through a sketch-refinement process. The Stage-I GAN sketches the primitive shape and colors of the object based on the given text description, yielding Stage-I low-resolution images. The Stage-II GAN takes Stage-I results and text descriptions as inputs, and generates high-resolution images with photo-realistic details. It is able to rectify defects in Stage-I results and add compelling details with the refinement process. To improve the diversity of the synthesized images and stabilize the training of the conditional-GAN, we introduce a novel Conditioning Augmentation technique that encourages smoothness in the latent conditioning manifold. Extensive experiments and comparisons with state-of-the-arts on benchmark datasets demonstrate that the proposed method achieves significant improvements on generating photo-realistic images conditioned on text descriptions.",
"",
"In this paper, we propose an Attentional Generative Adversarial Network (AttnGAN) that allows attention-driven, multi-stage refinement for fine-grained text-to-image generation. With a novel attentional generative network, the AttnGAN can synthesize fine-grained details at different subregions of the image by paying attentions to the relevant words in the natural language description. In addition, a deep attentional multimodal similarity model is proposed to compute a fine-grained image-text matching loss for training the generator. The proposed AttnGAN significantly outperforms the previous state of the art, boosting the best reported inception score by 14.14 on the CUB dataset and 170.25 on the more challenging COCO dataset. A detailed analysis is also performed by visualizing the attention layers of the AttnGAN. It for the first time shows that the layered attentional GAN is able to automatically select the condition at the word level for generating different parts of the image.",
"Although Generative Adversarial Networks (GANs) have shown remarkable success in various tasks, they still face challenges in generating high quality images. In this paper, we propose Stacked Generative Adversarial Networks (StackGAN) aiming at generating high-resolution photo-realistic images. First, we propose a two-stage generative adversarial network architecture, StackGAN-v1, for text-to-image synthesis. The Stage-I GAN sketches the primitive shape and colors of the object based on given text description, yielding low-resolution images. The Stage-II GAN takes Stage-I results and text descriptions as inputs, and generates high-resolution images with photo-realistic details. Second, an advanced multi-stage generative adversarial network architecture, StackGAN-v2, is proposed for both conditional and unconditional generative tasks. Our StackGAN-v2 consists of multiple generators and discriminators in a tree-like structure; images at multiple scales corresponding to the same scene are generated from different branches of the tree. StackGAN-v2 shows more stable training behavior than StackGAN-v1 by jointly approximating multiple distributions. Extensive experiments demonstrate that the proposed stacked generative adversarial networks significantly outperform other state-of-the-art methods in generating photo-realistic images."
]
} |
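The Conditioning Augmentation technique named in the StackGAN abstract above (sampling the latent text-conditioning code from a learned Gaussian) is compact enough to sketch; the dimensions below are illustrative:

```python
import torch
import torch.nn as nn

class ConditioningAugmentation(nn.Module):
    """Map a text embedding t to N(mu, sigma^2) and sample a conditioning
    code c by reparameterization, as described in the StackGAN abstract."""
    def __init__(self, text_dim=1024, cond_dim=128):
        super().__init__()
        self.fc = nn.Linear(text_dim, cond_dim * 2)

    def forward(self, t):
        mu, logvar = self.fc(t).chunk(2, dim=1)
        c = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        # KL(N(mu, sigma) || N(0, I)) term used to keep the manifold smooth.
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return c, kl
```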
1901.02273 | 2908623876 | The spatial transformer network has been used in a layered form in conjunction with a convolutional network to enable the model to transform data spatially. In this paper, we propose a combined spatial transformer network (STN) and Long Short-Term Memory network (LSTM) to classify digits in sequences formed by MNIST elements. This LSTM-STN model has a top-down attention mechanism benefiting from the LSTM layer, so that the STN layer can spatially transform short-term, independent elements of the sequence, thus avoiding the distortion that may be caused when the entire sequence is spatially transformed. It also avoids the influence of this distortion on the subsequent classification process using convolutional neural networks, and achieves a single-digit error of 1.6 compared with 2.2 for a convolutional neural network with an STN layer. | A recurrent attentional network for saliency detection was proposed in 2016 to detect objects with a top-down mechanism @cite_8 . Chen-Hsuan (2016) introduced the IC-STN attention mechanism, based on the LK algorithm and its inverse compositional variant, to eliminate distortion with less model capacity @cite_7 . In 2014, a non-differentiable attention mechanism was combined with an RNN and used for classification @cite_16 . In 2017, bottom-up and top-down attention for image captioning was put forward to amend the CNN network, obtaining improved results with the top-down attention mechanism @cite_18 . | {
"cite_N": [
"@cite_18",
"@cite_16",
"@cite_7",
"@cite_8"
],
"mid": [
"2737766105",
"1484210532",
"2949440248",
"2963635628"
],
"abstract": [
"",
"We present an attention-based model for recognizing multiple objects in images. The proposed model is a deep recurrent neural network trained with reinforcement learning to attend to the most relevant regions of the input image. We show that the model learns to both localize and recognize multiple objects despite being given only class labels during training. We evaluate the model on the challenging task of transcribing house number sequences from Google Street View images and show that it is both more accurate than the state-of-the-art convolutional networks and uses fewer parameters and less computation.",
"In this paper, we establish a theoretical connection between the classical Lucas & Kanade (LK) algorithm and the emerging topic of Spatial Transformer Networks (STNs). STNs are of interest to the vision and learning communities due to their natural ability to combine alignment and classification within the same theoretical framework. Inspired by the Inverse Compositional (IC) variant of the LK algorithm, we present Inverse Compositional Spatial Transformer Networks (IC-STNs). We demonstrate that IC-STNs can achieve better performance than conventional STNs with less model capacity; in particular, we show superior performance in pure image alignment tasks as well as joint alignment classification problems on real-world problems.",
"Convolutional-deconvolution networks can be adopted to perform end-to-end saliency detection. But, they do not work well with objects of multiple scales. To overcome such a limitation, in this work, we propose a recurrent attentional convolutional-deconvolution network (RACDNN). Using spatial transformer and recurrent network units, RACDNN is able to iteratively attend to selected image sub-regions to perform saliency refinement progressively. Besides tackling the scale problem, RACDNN can also learn context-aware features from past iterations to enhance saliency refinement in future iterations. Experiments on several challenging saliency detection datasets validate the effectiveness of RACDNN, and show that RACDNN outperforms state-of-the-art saliency detection methods."
]
} |
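The STN warp at the heart of the LSTM-STN model above is a differentiable affine resampling. In the model the 2x3 matrices would come from the LSTM state, which is omitted here; everything shown is an illustrative sketch:

```python
import torch
import torch.nn.functional as F

def spatial_transform(x, theta):
    """x: (N, C, H, W) images; theta: (N, 2, 3) affine matrices."""
    grid = F.affine_grid(theta, x.size(), align_corners=False)
    return F.grid_sample(x, grid, align_corners=False)

digit = torch.randn(1, 1, 28, 28)             # e.g. one MNIST element
theta = torch.tensor([[[1.0, 0.0, 0.2],
                       [0.0, 1.0, 0.0]]])     # small horizontal shift
warped = spatial_transform(digit, theta)
```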
1901.02453 | 2909735499 | Inverse rendering aims to estimate physical attributes of a scene, e.g., reflectance, geometry, and lighting, from image(s). Inverse rendering has been studied primarily for single objects or with methods that solve for only one of the scene attributes. We propose the first learning-based approach that jointly estimates albedo, normals, and lighting of an indoor scene from a single image. Our key contribution is the Residual Appearance Renderer (RAR), which can be trained to synthesize complex appearance effects (e.g., inter-reflection, cast shadows, near-field illumination, and realistic shading), which would be neglected otherwise. This enables us to perform self-supervised learning on real data using a reconstruction loss, based on re-synthesizing the input image from the estimated components. We finetune with real data after pretraining with synthetic data. To this end, we use physically-based rendering to create a large-scale synthetic dataset, named SUNCG-PBR, which is a significant improvement over prior datasets. Experimental results show that our approach outperforms state-of-the-art methods that estimate one or more scene attributes. | For inverse rendering from a few images, most traditional optimization-based approaches make strong assumptions about statistical priors on illumination and/or reflectance. A variety of sub-problems have been studied, such as intrinsic image decomposition @cite_8 , shape from shading @cite_37 @cite_30 , and BRDF estimation @cite_10 . Recently, SIRFS @cite_4 showed it is possible to factorize an image of an object or a scene into surface normals, albedo, and spherical harmonics lighting. In @cite_24 the authors use CNNs to predict the initial depth and then solve inverse rendering with an optimization. From an RGBD video, Zhang et al. @cite_22 proposed an optimization method to obtain reflectance and illumination of an indoor room. These optimization-based methods, although physically grounded, often do not generalize well to real images where those statistical priors are no longer valid. | {
"cite_N": [
"@cite_30",
"@cite_37",
"@cite_4",
"@cite_22",
"@cite_8",
"@cite_24",
"@cite_10"
],
"mid": [
"136339363",
"253426031",
"",
"",
"2099628660",
"2245606284",
"2184545030"
],
"abstract": [
"We introduce a method to jointly estimate the BRDF and geometry of an object from a single image under known, but uncontrolled, natural illumination. We show that this previously unexplored problem becomes tractable when one exploits the orientation clues embedded in the lighting environment. Intuitively, unique regions in the lighting environment act analogously to the point light sources of traditional photometric stereo; they strongly constrain the orientation of the surface patches that reflect them. The reflectance, which acts as a bandpass filter on the lighting environment, determines the necessary scale of such regions. Accurate reflectance estimation, however, relies on accurate surface orientation information. Thus, these two factors must be estimated jointly. To do so, we derive a probabilistic formulation and introduce priors to address situations where the reflectance and lighting environment do not sufficiently constrain the geometry of the object. Through extensive experimentation we show what this space looks like, and offer insights into what problems become solvable in various categories of real-world natural illumination environments.",
"Shape From Shading is the process of computing the three-dimensional shape of a surface from one image of that surface. Contrary to most of the other three-dimensional reconstruction problems (for example, stereo and photometric stereo), in the Shape From Shading problem, data are minimal (we use a single image!). As a consequence, this inverse problem is intrinsically a difficult one. In this chapter we describe the main difficulties of the problem and the most recent theoretical results. We also give some examples of realistic modeling and of rigorous numerical methods.",
"",
"",
"We present an algorithm that uses multiple cues to recover shading and reflectance intrinsic images from a single image. Using both color information and a classifier trained to recognize gray-scale patterns, each image derivative is classified as being caused by shading or a change in the surface's reflectance. Generalized Belief Propagation is then used to propagate information from areas where the correct classification is clear to areas where it is ambiguous. We also show results on real images.",
"Intrinsic image decomposition factorizes an observed image into its physical causes. This is most commonly framed as a decomposition into reflectance and shading, although recent progress has made full decompositions into shape, illumination, reflectance, and shading possible. However, existing factorization approaches require depth sensing to initialize the optimization of scene intrinsics. Rather than relying on depth sensors, we show that depth estimated purely from monocular appearance can provide sufficient cues for intrinsic image analysis. Our full intrinsic pipeline regresses depth by a fully convolutional network then jointly optimizes the intrinsic factorization to recover the input image. This combination yields full decompositions by uniting feature learning through deep network regression with physical modeling through statistical priors and random field regularization. This work demonstrates the first pipeline for full intrinsic decomposition of scenes from a single color image input alone.",
"The appearance of an object in an image encodes invaluable information about that object and the surrounding scene. Inferring object reflectance and scene illumination from an image would help us decode this information: reflectance can reveal important properties about the materials composing an object; the illumination can tell us, for instance, whether the scene is indoors or outdoors. Recovering reflectance and illumination from a single image in the real world, however, is a difficult task. Real scenes illuminate objects from every visible direction and real objects vary greatly in reflectance behavior. In addition, the image formation process introduces ambiguities, like color constancy, that make reversing the process ill-posed. To address this problem, we propose a Bayesian framework for joint reflectance and illumination inference in the real world. We develop a reflectance model and priors that precisely capture the space of real-world object reflectance and a flexible illumination model that can represent real-world illumination with priors that combat the deleterious effects of image formation. We analyze the performance of our approach on a set of synthetic data and demonstrate results on real-world scenes. These contributions enable reliable reflectance and illumination inference in the real world."
]
} |
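The spherical-harmonics lighting used in SIRFS-style factorizations (image ≈ albedo × shading) reduces, for Lambertian surfaces, to a 9-coefficient expansion over the normals. The constants below are the standard second-order SH basis factors; the function names are illustrative:

```python
import numpy as np

def sh_shading(normals, light):
    """normals: (H, W, 3) unit normals; light: 9 SH lighting coefficients.
    Returns the (H, W) Lambertian shading image."""
    nx, ny, nz = normals[..., 0], normals[..., 1], normals[..., 2]
    c0, c1, c2, c3, c4 = 0.282095, 0.488603, 1.092548, 0.315392, 0.546274
    basis = np.stack([
        np.full_like(nx, c0),           # Y_00
        c1 * ny, c1 * nz, c1 * nx,      # Y_1-1, Y_10, Y_11
        c2 * nx * ny, c2 * ny * nz,     # Y_2-2, Y_2-1
        c3 * (3.0 * nz ** 2 - 1.0),     # Y_20
        c2 * nx * nz,                   # Y_21
        c4 * (nx ** 2 - ny ** 2),       # Y_22
    ], axis=-1)
    return basis @ np.asarray(light)
```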
1901.02453 | 2909735499 | Inverse rendering aims to estimate physical attributes of a scene, e.g., reflectance, geometry, and lighting, from image(s). Inverse rendering has been studied primarily for single objects or with methods that solve for only one of the scene attributes. We propose the first learning-based approach that jointly estimates albedo, normals, and lighting of an indoor scene from a single image. Our key contribution is the Residual Appearance Renderer (RAR), which can be trained to synthesize complex appearance effects (e.g., inter-reflection, cast shadows, near-field illumination, and realistic shading), which would be neglected otherwise. This enables us to perform self-supervised learning on real data using a reconstruction loss, based on re-synthesizing the input image from the estimated components. We finetune with real data after pretraining with synthetic data. To this end, we use physically-based rendering to create a large-scale synthetic dataset, named SUNCG-PBR, which is a significant improvement over prior datasets. Experimental results show that our approach outperforms state-of-the-art methods that estimate one or more scene attributes. | With recent advances in deep learning, researchers have proposed to learn data-driven priors to solve some of these inverse problems with CNNs, many of which have achieved promising results. For example, it is shown that depth and normals may be estimated from a single image @cite_48 @cite_23 @cite_0 or multiple images @cite_7 . Parametric BRDF may be estimated either from an RGBD sequence of an object @cite_14 @cite_13 or for planar surfaces @cite_31 . Lighting may also be estimated from images, as an environment map @cite_42 @cite_28 , spherical harmonics @cite_45 , or point lights @cite_39 . Recent works @cite_16 @cite_15 also performed inverse rendering on faces. Some recent works also jointly learn some of the intrinsic components of an object, like reflectance and illumination @cite_33 @cite_1 , reflectance and shape @cite_29 , and normals, BRDF, and distant lighting @cite_25 @cite_47 . Nevertheless, these efforts are mainly limited to objects rather than scenes, and do not model the aforementioned residual appearance effects such as inter-reflection, near-field lighting, and cast shadows present in real images. | {
"cite_N": [
"@cite_47",
"@cite_14",
"@cite_33",
"@cite_7",
"@cite_15",
"@cite_28",
"@cite_48",
"@cite_29",
"@cite_42",
"@cite_1",
"@cite_39",
"@cite_0",
"@cite_45",
"@cite_23",
"@cite_31",
"@cite_16",
"@cite_13",
"@cite_25"
],
"mid": [
"2962818260",
"2798567105",
"2324727884",
"2805635917",
"",
"2554856610",
"",
"",
"2605060370",
"2963970120",
"2798640928",
"",
"2963390109",
"",
"2963650738",
"2607170299",
"2962741401",
""
],
"abstract": [
"The ability to edit materials of objects in images is desirable by many content creators. However, this is an extremely challenging task as it requires to disentangle intrinsic physical properties of an image. We propose an end-to-end network architecture that replicates the forward image formation process to accomplish this task. Specifically, given a single image, the network first predicts intrinsic properties, i.e. shape, illumination, and material, which are then provided to a rendering layer. This layer performs in-network image synthesis, thereby enabling the network to understand the physics behind the image formation process. The proposed rendering layer is fully differentiable, supports both diffuse and specular materials, and thus can be applicable in a variety of problem settings. We demonstrate a rich set of visually plausible material editing examples and provide an extensive comparative study.",
"We present the first end-to-end approach for real-time material estimation for general object shapes with uniform material that only requires a single color image as input. In addition to Lambertian surface properties, our approach fully automatically computes the specular albedo, material shininess, and a foreground segmentation. We tackle this challenging and ill-posed inverse rendering problem using recent advances in image-to-image translation techniques based on deep convolutional encoder-decoder architectures. The underlying core representations of our approach are specular shading, diffuse shading and mirror images, which allow to learn the effective and accurate separation of diffuse and specular albedo. In addition, we propose a novel highly efficient perceptual rendering loss that mimics real-world image formation and obtains intermediate results even during run time. The estimation of material parameters at real-time frame rates enables exciting mixed-reality applications, such as seamless illumination-consistent integration of virtual objects into real-world scenes, and virtual material cloning. We demonstrate our approach in a live setup, compare it to the state of the art, and demonstrate its effectiveness through quantitative and qualitative evaluation.",
"In this paper we are extracting surface reflectance and natural environmental illumination from a reflectance map, i.e. from a single 2D image of a sphere of one material under one illumination. This is a notoriously difficult problem, yet key to various re-rendering applications. With the recent advances in estimating reflectance maps from 2D images their further decomposition has become increasingly relevant. To this end, we propose a Convolutional Neural Network (CNN) architecture to reconstruct both material parameters (i.e. Phong) as well as illumination (i.e. high-resolution spherical illumination maps), that is solely trained on synthetic data. We demonstrate that decomposition of synthetic as well as real photographs of reflectance maps, both in High Dynamic Range (HDR), and, for the first time, on Low Dynamic Range (LDR) as well. Results are compared to previous approaches quantitatively as well as qualitatively in terms of re-renderings where illumination, material, view or shape are changed.",
"",
"",
"We present a CNN-based technique to estimate high-dynamic range outdoor illumination from a single low dynamic range image. To train the CNN, we leverage a large dataset of outdoor panoramas. We fit a low-dimensional physically-based outdoor illumination model to the skies in these panoramas giving us a compact set of parameters (including sun position, atmospheric conditions, and camera parameters). We extract limited field-of-view images from the panoramas, and train a CNN with this large set of input image–output lighting parameter pairs. Given a test image, this network can be used to infer illumination parameters that can, in turn, be used to reconstruct an outdoor illumination environment map. We demonstrate that our approach allows the recovery of plausible illumination conditions and enables photorealistic virtual object insertion from a single image. An extensive evaluation on both the panorama dataset and captured HDR environment maps shows that our technique significantly outperforms previous solutions to this problem.",
"",
"",
"We propose an automatic method to infer high dynamic range illumination from a single, limited field-of-view, low dynamic range photograph of an indoor scene. In contrast to previous work that relies on specialized image capture, user input, and or simple scene models, we train an end-to-end deep neural network that directly regresses a limited field-of-view photo to HDR illumination, without strong assumptions on scene geometry, material properties, or lighting. We show that this can be accomplished in a three step process: 1) we train a robust lighting classifier to automatically annotate the location of light sources in a large dataset of LDR environment maps, 2) we use these annotations to train a deep neural network that predicts the location of lights in a scene from a single limited field-of-view photo, and 3) we fine-tune this network using a small dataset of HDR environment maps to predict light intensities. This allows us to automatically recover high-quality HDR illumination estimates that significantly outperform previous state-of-the-art methods. Consequently, using our illumination estimates for applications like 3D object insertion, produces photo-realistic results that we validate via a perceptual user study.",
"Faithful manipulation of shape, material, and illumination in 2D Internet images would greatly benefit from a reliable factorization of appearance into material (i.e. diffuse and specular) and illumination (i.e. environment maps). On the one hand, current methods that produce very high fidelity results, typically require controlled settings, expensive devices, or significant manual effort. To the other hand, methods that are automatic and work on 'in the wild' Internet images, often extract only low-frequency lighting or diffuse materials. In this work, we propose to make use of a set of photographs in order to jointly estimate the non-diffuse materials and sharp lighting in an uncontrolled setting. Our key observation is that seeing multiple instances of the same material under different illumination (i.e. environment), and different materials under the same illumination provide valuable constraints that can be exploited to yield a high-quality solution (i.e. specular materials and environment illumination) for all the observed materials and environments. Similar constraints also arise when observing multiple materials in a single environment, or a single material across multiple environments. Technically, we enable this by a novel scalable formulation using parametric mixture models that allows for simultaneous estimation of all materials and illumination directly from a set of (uncontrolled) Internet images. The core of this approach is an optimization procedure that uses two neural networks that are trained on synthetic images to predict good gradients in parametric space given observation of reflected light. We evaluate our method on a range of synthetic and real examples to generate high-quality estimates, qualitatively compare our results against state-of-the-art alternatives via a user study, and demonstrate photo-consistent image manipulation that is otherwise very challenging to achieve.",
"We introduce the light localization problem. A scene is illuminated by a set of unobserved isotropic point lights. Given the geometry, materials, and illuminated, appearance of the scene, the light localization problem is to completely recover the number, positions, and intensities of the lights. We first present a scene transform that identifies likely light positions. Based on this transform, we develop an iterative algorithm to locate remaining lights and determine all light intensities. We demonstrate the success of this method in a large set of 2D synthetic scenes, and show that it extends to 3D, in both synthetic scenes and real-world scenes.",
"",
"Lighting estimation from faces is an important task and has applications in many areas such as image editing, intrinsic image decomposition, and image forgery detection. We propose to train a deep Convolutional Neural Network (CNN) to regress lighting parameters from a single face image. Lacking massive ground truth lighting labels for face images in the wild, we use an existing method to estimate lighting parameters, which are treated as ground truth with noise. To alleviate the effect of such noise, we utilize the idea of Generative Adversarial Networks (GAN) and propose a Label Denoising Adversarial Network (LDAN). LDAN makes use of synthetic data with accurate ground truth to help train a deep CNN for lighting regression on real face images. Experiments show that our network outperforms existing methods in producing consistent lighting parameters of different faces under similar lighting conditions. To further evaluate the proposed method, we also apply it to regress object 2D key points where ground truth labels are available. Our experiments demonstrate its effectiveness on this application.",
"",
"We propose a material acquisition approach to recover the spatially-varying BRDF and normal map of a near-planar surface from a single image captured by a handheld mobile phone camera. Our method images the surface under arbitrary environment lighting with the flash turned on, thereby avoiding shadows while simultaneously capturing high-frequency specular highlights. We train a CNN to regress an SVBRDF and surface normals from this image. Our network is trained using a large-scale SVBRDF dataset and designed to incorporate physical insights for material estimation, including an in-network rendering layer to model appearance and a material classifier to provide additional supervision during training. We refine the results from the network using a dense CRF module whose terms are designed specifically for our task. The framework is trained end-to-end and produces high quality results for a variety of materials. We provide extensive ablation studies to evaluate our network on both synthetic and real data, while demonstrating significant improvements in comparisons with prior works.",
"Traditional face editing methods often require a number of sophisticated and task specific algorithms to be applied one after the other — a process that is tedious, fragile, and computationally intensive. In this paper, we propose an end-to-end generative adversarial network that infers a face-specific disentangled representation of intrinsic face properties, including shape (i.e. normals), albedo, and lighting, and an alpha matte. We show that this network can be trained on in-the-wild images by incorporating an in-network physically-based image formation module and appropriate loss functions. Our disentangling latent representation allows for semantically relevant edits, where one aspect of facial appearance can be manipulated while keeping orthogonal properties fixed, and we demonstrate its use for a number of facial editing applications.",
"Estimating surface reflectance (BRDF) is one key component for complete 3D scene capture, with wide applications in virtual reality, augmented reality, and human computer interaction. Prior work is either limited to controlled environments (e.g., gonioreflectometers, light stages, or multi-camera domes), or requires the joint optimization of shape, illumination, and reflectance, which is often computationally too expensive (e.g., hours of running time) for real-time applications. Moreover, most prior work requires HDR images as input which further complicates the capture process. In this paper, we propose a lightweight approach for surface reflectance estimation directly from 8-bit RGB images in real-time, which can be easily plugged into any 3D scanning-and-fusion system with a commodity RGBD sensor. Our method is learning-based, with an inference time of less than 90ms per scene and a model size of less than 340K bytes. We propose two novel network architectures, HemiCNN and Grouplet, to deal with the unstructured input data from multiple viewpoints under unknown illumination. We further design a loss function to resolve the color-constancy and scale ambiguity. In addition, we have created a large synthetic dataset, SynBRDF, which comprises a total of 500K RGBD images rendered with a physically-based ray tracer under a variety of natural illumination, covering 5000 materials and 5000 shapes. SynBRDF is the first large-scale benchmark dataset for reflectance estimation. Experiments on both synthetic data and real data show that the proposed method effectively recovers surface reflectance, and outperforms prior work for reflectance estimation in uncontrolled environments.",
""
]
} |
1901.02453 | 2909735499 | Inverse rendering aims to estimate physical attributes of a scene, e.g., reflectance, geometry, and lighting, from image(s). Inverse rendering has been studied primarily for single objects or with methods that solve for only one of the scene attributes. We propose the first learning-based approach that jointly estimates albedo, normals, and lighting of an indoor scene from a single image. Our key contribution is the Residual Appearance Renderer (RAR), which can be trained to synthesize complex appearance effects (e.g., inter-reflection, cast shadows, near-field illumination, and realistic shading), which would be neglected otherwise. This enables us to perform self-supervised learning on real data using a reconstruction loss, based on re-synthesizing the input image from the estimated components. We finetune with real data after pretraining with synthetic data. To this end, we use physically-based rendering to create a large-scale synthetic dataset, named SUNCG-PBR, which is a significant improvement over prior datasets. Experimental results show that our approach outperforms state-of-the-art methods that estimate one or more scene attributes. | A few recent works from the graphics community proposed differentiable Monte Carlo renderers @cite_34 @cite_26 for optimizing rendering parameters (e.g., camera poses, scattering parameters) for synthetic 3D scenes. Neural mesh renderer @cite_38 addressed the problem of differentiable visibility and rasterization. Our proposed RAR is in the same spirit, but its goal is to synthesize the complex appearance effects for inverse rendering on real images, which is significantly more challenging. | {
"cite_N": [
"@cite_38",
"@cite_34",
"@cite_26"
],
"mid": [
"2963527086",
"2902812770",
""
],
"abstract": [
"For modeling the 3D world behind 2D images, which 3D representation is most appropriate? A polygon mesh is a promising candidate for its compactness and geometric properties. However, it is not straightforward to model a polygon mesh from 2D images using neural networks because the conversion from a mesh to an image, or rendering, involves a discrete operation called rasterization, which prevents back-propagation. Therefore, in this work, we propose an approximate gradient for rasterization that enables the integration of rendering into neural networks. Using this renderer, we perform single-image 3D mesh reconstruction with silhouette image supervision and our system outperforms the existing voxel-based approach. Additionally, we perform gradient-based 3D mesh editing operations, such as 2D-to-3D style transfer and 3D DeepDream, with 2D supervision for the first time. These applications demonstrate the potential of the integration of a mesh renderer into neural networks and the effectiveness of our proposed renderer.",
"Gradient-based methods are becoming increasingly important for computer graphics, machine learning, and computer vision. The ability to compute gradients is crucial to optimization, inverse problems, and deep learning. In rendering, the gradient is required with respect to variables such as camera parameters, light sources, scene geometry, or material appearance. However, computing the gradient of rendering is challenging because the rendering integral includes visibility terms that are not differentiable. Previous work on differentiable rendering has focused on approximate solutions. They often do not handle secondary effects such as shadows or global illumination, or they do not provide the gradient with respect to variables other than pixel coordinates. We introduce a general-purpose differentiable ray tracer, which, to our knowledge, is the first comprehensive solution that is able to compute derivatives of scalar functions over a rendered image with respect to arbitrary scene parameters such as camera pose, scene geometry, materials, and lighting parameters. The key to our method is a novel edge sampling algorithm that directly samples the Dirac delta functions introduced by the derivatives of the discontinuous integrand. We also develop efficient importance sampling methods based on spatial hierarchies. Our method can generate gradients in times running from seconds to minutes depending on scene complexity and desired precision. We interface our differentiable ray tracer with the deep learning library PyTorch and show prototype applications in inverse rendering and the generation of adversarial examples for neural networks.",
""
]
} |
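The differentiable-rendering idea in this record can be illustrated with a toy sketch (not the cited papers' code): when image formation is written in differentiable operations, a scene parameter such as the light direction can be recovered by gradient descent. The Lambertian shading model, random geometry, and constant albedo below are illustrative assumptions; real differentiable renderers additionally handle the non-differentiable visibility and rasterization terms mentioned above.

```python
# Toy differentiable "renderer": Lambertian shading of a point cloud.
# Gradients flow from a pixel-wise loss back to the light direction.
import torch
import torch.nn.functional as F

def render(normals, light_dir, albedo=0.8):
    # normals: (N, 3) unit surface normals; light_dir: (3,) unit vector.
    # Lambertian n·l shading, clamped so back-facing points are dark.
    return albedo * (normals @ light_dir).clamp(min=0.0)

normals = F.normalize(torch.randn(1000, 3), dim=1)            # random geometry
true_light = F.normalize(torch.tensor([0.3, 0.5, 0.8]), dim=0)
target = render(normals, true_light)                          # "observed" image

light = torch.randn(3, requires_grad=True)                    # unknown lighting
opt = torch.optim.Adam([light], lr=0.05)
for _ in range(300):
    opt.zero_grad()
    loss = F.mse_loss(render(normals, F.normalize(light, dim=0)), target)
    loss.backward()                      # backprop through the shading model
    opt.step()
print(F.normalize(light, dim=0).detach(), true_light)         # should be close
```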
1901.02453 | 2909735499 | Inverse rendering aims to estimate physical attributes of a scene, e.g., reflectance, geometry, and lighting, from image(s). Inverse rendering has been studied primarily for single objects or with methods that solve for only one of the scene attributes. We propose the first learning-based approach that jointly estimates albedo, normals, and lighting of an indoor scene from a single image. Our key contribution is the Residual Appearance Renderer (RAR), which can be trained to synthesize complex appearance effects (e.g., inter-reflection, cast shadows, near-field illumination, and realistic shading), which would be neglected otherwise. This enables us to perform self-supervised learning on real data using a reconstruction loss, based on re-synthesizing the input image from the estimated components. We finetune with real data after pretraining with synthetic data. To this end, we use physically-based rendering to create a large-scale synthetic dataset, named SUNCG-PBR, which is a significant improvement over prior datasets. Experimental results show that our approach outperforms state-of-the-art methods that estimate one or more scene attributes. | High-quality synthetic data is essential for learning-based inverse rendering. SUNCG @cite_46 created a large-scale 3D indoor scene dataset. The images of the SUNCG dataset are not photo-realistic as they are rendered with OpenGL using diffuse materials and point source lighting. An extension of this dataset, PBRS @cite_18 , uses physically based rendering to generate photo-realistic images. However, due to the computational bottleneck in ray-tracing, the rendered images are quite noisy and limited to one lighting condition. There also exist a few real-world datasets with partial labels on geometry, reflectance, or lighting. NYUv2 @cite_12 provides surface normals from indoor scenes. OpenSurfaces @cite_41 provides partial segmentation of objects with their glossiness properties labeled by humans. Relative reflectance judgments from humans are provided in the IIW dataset @cite_21 , which are used in many intrinsic image decomposition methods. In contrast to these works, we created a large-scale synthetic dataset with significant image quality improvement. | {
"cite_N": [
"@cite_18",
"@cite_41",
"@cite_21",
"@cite_46",
"@cite_12"
],
"mid": [
"2563100679",
"2594519801",
"2076491823",
"2557465155",
""
],
"abstract": [
"Indoor scene understanding is central to applications such as robot navigation and human companion assistance. Over the last years, data-driven deep neural networks have outperformed many traditional approaches thanks to their representation learning capabilities. One of the bottlenecks in training for better representations is the amount of available per-pixel ground truth data that is required for core scene understanding tasks such as semantic segmentation, normal prediction, and object boundary detection. To address this problem, a number of works proposed using synthetic data. However, a systematic study of how such synthetic data is generated is missing. In this work, we introduce a large-scale synthetic dataset with 500K physically-based rendered images from 45K realistic 3D indoor scenes. We study the effects of rendering methods and scene lighting on training for three computer vision tasks: surface normal prediction, semantic segmentation, and object boundary detection. This study provides insights into the best practices for training with synthetic data (more realistic rendering is worth it) and shows that pretraining with our new synthetic dataset can improve results beyond the current state of the art on all three tasks.",
"A key requirement for leveraging supervised deep learning methods is the availability of large, labeled datasets. Unfortunately, in the context of RGB-D scene understanding, very little data is available – current datasets cover a small range of scene views and have limited semantic annotations. To address this issue, we introduce ScanNet, an RGB-D video dataset containing 2.5M views in 1513 scenes annotated with 3D camera poses, surface reconstructions, and semantic segmentations. To collect this data, we designed an easy-to-use and scalable RGB-D capture system that includes automated surface reconstruction and crowdsourced semantic annotation. We show that using this data helps achieve state-of-the-art performance on several 3D scene understanding tasks, including 3D object classification, semantic voxel labeling, and CAD model retrieval.",
"Intrinsic image decomposition separates an image into a reflectance layer and a shading layer. Automatic intrinsic image decomposition remains a significant challenge, particularly for real-world scenes. Advances on this longstanding problem have been spurred by public datasets of ground truth data, such as the MIT Intrinsic Images dataset. However, the difficulty of acquiring ground truth data has meant that such datasets cover a small range of materials and objects. In contrast, real-world scenes contain a rich range of shapes and materials, lit by complex illumination. In this paper we introduce Intrinsic Images in the Wild, a large-scale, public dataset for evaluating intrinsic image decompositions of indoor scenes. We create this benchmark through millions of crowdsourced annotations of relative comparisons of material properties at pairs of points in each scene. Crowdsourcing enables a scalable approach to acquiring a large database, and uses the ability of humans to judge material comparisons, despite variations in illumination. Given our database, we develop a dense CRF-based intrinsic image algorithm for images in the wild that outperforms a range of state-of-the-art intrinsic image algorithms. Intrinsic image decomposition remains a challenging problem; we release our code and database publicly to support future research on this problem, available online at http: intrinsic.cs.cornell.edu .",
"This paper focuses on semantic scene completion, a task for producing a complete 3D voxel representation of volumetric occupancy and semantic labels for a scene from a single-view depth map observation. Previous work has considered scene completion and semantic labeling of depth maps separately. However, we observe that these two problems are tightly intertwined. To leverage the coupled nature of these two tasks, we introduce the semantic scene completion network (SSCNet), an end-to-end 3D convolutional network that takes a single depth image as input and simultaneously outputs occupancy and semantic labels for all voxels in the camera view frustum. Our network uses a dilation-based 3D context module to efficiently expand the receptive field and enable 3D context learning. To train our network, we construct SUNCG - a manually created largescale dataset of synthetic 3D scenes with dense volumetric annotations. Our experiments demonstrate that the joint model outperforms methods addressing each task in isolation and outperforms alternative approaches on the semantic scene completion task. The dataset and code is available at http: sscnet.cs.princeton.edu.",
""
]
} |
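As background for the noise issue this record raises: in a physically based renderer each pixel is a Monte Carlo estimate of a lighting integral, so its standard error shrinks only as 1/sqrt(samples), which is why low sample budgets leave visible noise. A minimal 1-D illustration; the integrand is a stand-in, not an actual rendering integral:

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.sin(np.pi * x) ** 2        # stand-in integrand; true mean 0.5
for n in (4, 64, 1024):                     # "samples per pixel"
    est = [f(rng.random(n)).mean() for _ in range(2000)]
    print(f"{n:5d} spp -> mean {np.mean(est):.3f}, std error {np.std(est):.4f}")
    # The std error drops roughly 4x for each 16x increase in sample count.
```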
1901.02453 | 2909735499 | Inverse rendering aims to estimate physical attributes of a scene, e.g., reflectance, geometry, and lighting, from image(s). Inverse rendering has been studied primarily for single objects or with methods that solve for only one of the scene attributes. We propose the first learning-based approach that jointly estimates albedo, normals, and lighting of an indoor scene from a single image. Our key contribution is the Residual Appearance Renderer (RAR), which can be trained to synthesize complex appearance effects (e.g., inter-reflection, cast shadows, near-field illumination, and realistic shading), which would be neglected otherwise. This enables us to perform self-supervised learning on real data using a reconstruction loss, based on re-synthesizing the input image from the estimated components. We finetune with real data after pretraining with synthetic data. To this end, we use physically-based rendering to create a large-scale synthetic dataset, named SUNCG-PBR, which is a significant improvement over prior datasets. Experimental results show that our approach outperforms state-of-the-art methods that estimate one or more scene attributes. | Intrinsic image decomposition is a sub-problem of inverse rendering, where a single image is decomposed into albedo and shading. Recent methods learn intrinsic image decomposition from labeled synthetic data @cite_9 @cite_40 @cite_25 and from unlabeled @cite_11 or partially labeled real data @cite_6 @cite_35 @cite_2 @cite_21 . Intrinsic image decomposition methods do not explicitly recover geometry, illumination or glossiness of the material, but rather combine them together as shading. In contrast, our goal is to perform a complete inverse rendering, which has a wider range of applications in AR/VR. | {
"cite_N": [
"@cite_35",
"@cite_9",
"@cite_21",
"@cite_6",
"@cite_40",
"@cite_2",
"@cite_25",
"@cite_11"
],
"mid": [
"2888277922",
"2963662484",
"2076491823",
"",
"2193045528",
"2583983165",
"",
"2964321964"
],
"abstract": [
"Intrinsic image decomposition is a challenging, long-standing computer vision problem for which ground truth data is very difficult to acquire. We explore the use of synthetic data for training CNN-based intrinsic image decomposition models, then applying these learned models to real-world images. To that end, we present CGIntrinsics, a new, large-scale dataset of physically-based rendered images of scenes with full ground truth decompositions. The rendering process we use is carefully designed to yield high-quality, realistic images, which we find to be crucial for this problem domain. We also propose a new end-to-end training method that learns better decompositions by leveraging CGIntrinsics, and optionally IIW and SAW, two recent datasets of sparse annotations on real-world images. Surprisingly, we find that a decomposition network trained solely on our synthetic data outperforms the state-of-the-art on both IIW and SAW, and performance improves even further when IIW and SAW data is added during training. Our work demonstrates the suprising effectiveness of carefully-rendered synthetic data for the intrinsic images task.",
"We present a new deep supervised learning method for intrinsic decomposition of a single image into its albedo and shading components. Our contributions are based on a new fully convolutional neural network that estimates absolute albedo and shading jointly. Our solution relies on a single end-to-end deep sequence of residual blocks and a perceptually-motivated metric formed by two adversarially trained discriminators. As opposed to classical intrinsic image decomposition work, it is fully data-driven, hence does not require any physical priors like shading smoothness or albedo sparsity, nor does it rely on geometric information such as depth. Compared to recent deep learning techniques, we simplify the architecture, making it easier to build and train, and constrain it to generate a valid and reversible decomposition. We rediscuss and augment the set of quantitative metrics so as to account for the more challenging recovery of non scale-invariant quantities. We train and demonstrate our architecture on the publicly available MPI Sintel dataset and its intrinsic image decomposition, show attenuated overfitting issues and discuss generalizability to other data. Results show that our work outperforms the state of the art deep algorithms both on the qualitative and quantitative aspect.",
"Intrinsic image decomposition separates an image into a reflectance layer and a shading layer. Automatic intrinsic image decomposition remains a significant challenge, particularly for real-world scenes. Advances on this longstanding problem have been spurred by public datasets of ground truth data, such as the MIT Intrinsic Images dataset. However, the difficulty of acquiring ground truth data has meant that such datasets cover a small range of materials and objects. In contrast, real-world scenes contain a rich range of shapes and materials, lit by complex illumination. In this paper we introduce Intrinsic Images in the Wild, a large-scale, public dataset for evaluating intrinsic image decompositions of indoor scenes. We create this benchmark through millions of crowdsourced annotations of relative comparisons of material properties at pairs of points in each scene. Crowdsourcing enables a scalable approach to acquiring a large database, and uses the ability of humans to judge material comparisons, despite variations in illumination. Given our database, we develop a dense CRF-based intrinsic image algorithm for images in the wild that outperforms a range of state-of-the-art intrinsic image algorithms. Intrinsic image decomposition remains a challenging problem; we release our code and database publicly to support future research on this problem, available online at http: intrinsic.cs.cornell.edu .",
"",
"We introduce a new approach to intrinsic image decomposition, the task of decomposing a single image into albedo and shading components. Our strategy, which we term direct intrinsics, is to learn a convolutional neural network (CNN) that directly predicts output albedo and shading channels from an input RGB image patch. Direct intrinsics is a departure from classical techniques for intrinsic image decomposition, which typically rely on physically-motivated priors and graph-based inference algorithms. The large-scale synthetic ground-truth of the MPI Sintel dataset plays the key role in training direct intrinsics. We demonstrate results on both the synthetic images of Sintel and the real images of the classic MIT intrinsic image dataset. On Sintel, direct intrinsics, using only RGB input, outperforms all prior work, including methods that rely on RGB+Depth input. Direct intrinsics also generalizes across modalities, our Sintel-trained CNN produces quite reasonable decompositions on the real images of the MIT dataset. Our results indicate that the marriage of CNNs with synthetic training data may be a powerful new technique for tackling classic problems in computer vision.",
"Separating an image into reflectance and shading layers poses a challenge for learning approaches because no large corpus of precise and realistic ground truth decompositions exists. The Intrinsic Images in the Wild (IIW) dataset provides a sparse set of relative human reflectance judgments, which serves as a standard benchmark for intrinsic images. A number of methods use IIW to learn statistical dependencies between the images and their reflectance layer. Although learning plays an important role for high performance, we show that a standard signal processing technique achieves performance on par with current state-of-the-art. We propose a loss function for CNN learning of dense reflectance predictions. Our results show a simple pixel-wise decision, without any context or prior knowledge, is sufficient to provide a strong baseline on IIW. This sets a competitive baseline which only two other approaches surpass. We then develop a joint bilateral filtering method that implements strong prior knowledge about reflectance constancy. This filtering operation can be applied to any intrinsic image algorithm and we improve several previous results achieving a new state-of-the-art on IIW. Our findings suggest that the effect of learning-based approaches may have been over-estimated so far. Explicit prior knowledge is still at least as important to obtain high performance in intrinsic image decompositions.",
"",
"Single-view intrinsic image decomposition is a highly ill-posed problem, and so a promising approach is to learn from large amounts of data. However, it is difficult to collect ground truth training data at scale for intrinsic images. In this paper, we explore a different approach to learning intrinsic images: observing image sequences over time depicting the same scene under changing illumination, and learning single-view decompositions that are consistent with these changes. This approach allows us to learn without ground truth decompositions, and to instead exploit information available from multiple images when training. Our trained model can then be applied at test time to single views. We describe a new learning framework based on this idea, including new loss functions that can be efficiently evaluated over entire sequences. While prior learning-based methods achieve good performance on specific benchmarks, we show that our approach generalizes well to several diverse datasets, including MIT intrinsic images, Intrinsic Images in the Wild and Shading Annotations in the Wild.1"
]
} |
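The multiplicative model behind intrinsic image decomposition, I = A ⊙ S, and the reconstruction loss used by its self-supervised variants can be stated compactly. The two-headed network producing the predictions below is hypothetical; real methods add priors (albedo sparsity, shading smoothness) or the sparse human judgments described in this record as extra supervision:

```python
import torch
import torch.nn.functional as F

def reconstruction_loss(image, albedo, shading):
    # image, albedo: (B, 3, H, W); shading: (B, 1, H, W), broadcast over RGB.
    # The pixel-wise product re-synthesizes the input: I ≈ A * S.
    return F.mse_loss(albedo * shading, image)

# Many methods work in the log domain instead, where the model is additive:
# log I = log A + log S, turning the product into a sum of feature maps.
```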
1901.02350 | 2909237195 | In recent years, face detection has experienced significant performance improvement with the boost of deep convolutional neural networks. In this report, we reimplement the state-of-the-art detector SRN and apply some tricks proposed in the recent literature to obtain an extremely strong face detector, named VIM-FD. Specifically, we exploit a more powerful backbone network like DenseNet-121, revisit the data augmentation based on data-anchor-sampling proposed in PyramidBox, and use the max-in-out label and anchor matching strategy in SFD. In addition, we also introduce the attention mechanism to provide additional supervision. Over the most popular and challenging face detection benchmark, i.e., WIDER FACE, the proposed VIM-FD achieves state-of-the-art performance. | Face detection has been a challenging research field since its emergence in the 1990s. Viola and Jones pioneered the use of Haar features and AdaBoost to train a face detector with promising accuracy and efficiency @cite_39 , which inspired several different approaches afterwards @cite_16 @cite_35 . Apart from those, another important work is the introduction of the Deformable Part Model (DPM) @cite_22 . | {
"cite_N": [
"@cite_35",
"@cite_16",
"@cite_22",
"@cite_39"
],
"mid": [
"1984525543",
"2247274765",
"1849007038",
"2137401668"
],
"abstract": [
"Cascades of boosted ensembles have become popular in the object detection community following their highly successful introduction in the face detector of Viola and Jones. Since then, researchers have sought to improve upon the original approach by incorporating new methods along a variety of axes (e.g. alternative boosting methods, feature sets, etc.). Nevertheless, key decisions about how many hypotheses to include in an ensemble and the appropriate balance of detection and false positive rates in the individual stages are often made by user intervention or by an automatic method that produces unnecessarily slow detectors. We propose a novel method for making these decisions, which exploits the shape of the stage ROC curves in ways that have been previously ignored. The result is a detector that is significantly faster than the one produced by the standard automatic method. When this algorithm is combined with a recycling method for reusing the outputs of early stages in later ones and with a retracing method that inserts new early rejection points in the cascade, the detection speed matches that of the best hand-crafted detector. We also exploit joint distributions over several features in weak learning to improve overall detector accuracy, and explore ways to improve training time by aggressively filtering features.",
"We propose a method to address challenges in unconstrained face detection, such as arbitrary pose variations and occlusions. First, a new image feature called Normalized Pixel Difference (NPD) is proposed. NPD feature is computed as the difference to sum ratio between two pixel values, inspired by the Weber Fraction in experimental psychology. The new feature is scale invariant, bounded, and is able to reconstruct the original image. Second, we propose a deep quadratic tree to learn the optimal subset of NPD features and their combinations, so that complex face manifolds can be partitioned by the learned rules. This way, only a single soft-cascade classifier is needed to handle unconstrained face detection. Furthermore, we show that the NPD features can be efficiently obtained from a look up table, and the detection template can be easily scaled, making the proposed face detector very fast. Experimental results on three public face datasets (FDDB, GENKI, and CMU-MIT) show that the proposed method achieves state-of-the-art performance in detecting unconstrained faces with arbitrary pose variations and occlusions in cluttered scenes.",
"Face detection is a mature problem in computer vision. While diverse high performing face detectors have been proposed in the past, we present two surprising new top performance results. First, we show that a properly trained vanilla DPM reaches top performance, improving over commercial and research systems. Second, we show that a detector based on rigid templates - similar in structure to the Viola&Jones detector - can reach similar top performance on this task. Importantly, we discuss issues with existing evaluation benchmark and propose an improved procedure.",
"This paper describes a face detection framework that is capable of processing images extremely rapidly while achieving high detection rates. There are three key contributions. The first is the introduction of a new image representation called the “Integral Image” which allows the features used by our detector to be computed very quickly. The second is a simple and efficient classifier which is built using the AdaBoost learning algorithm (Freund and Schapire, 1995) to select a small number of critical visual features from a very large set of potential features. The third contribution is a method for combining classifiers in a “cascade” which allows background regions of the image to be quickly discarded while spending more computation on promising face-like regions. A set of experiments in the domain of face detection is presented. The system yields face detection performance comparable to the best previous systems (Sung and Poggio, 1998; , 1998; Schneiderman and Kanade, 2000; , 2000). Implemented on a conventional desktop, face detection proceeds at 15 frames per second."
]
} |
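The Viola-Jones ingredients named in this record are easy to sketch: an integral image makes any rectangle sum a four-lookup operation, which is what lets thousands of Haar-like features be evaluated per sliding window inside a boosted cascade. The feature layout and window size below are illustrative, not the original detector's configuration:

```python
import numpy as np

def integral_image(img):
    # ii[r, c] = sum of img[0..r, 0..c]
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, r0, c0, r1, c1):
    # Inclusive rectangle sum via at most 4 lookups on the integral image.
    s = ii[r1, c1]
    if r0 > 0: s -= ii[r0 - 1, c1]
    if c0 > 0: s -= ii[r1, c0 - 1]
    if r0 > 0 and c0 > 0: s += ii[r0 - 1, c0 - 1]
    return s

def haar_two_rect(ii, r, c, h, w):
    # "Top half minus bottom half" edge feature, one of the Haar-like family.
    top = rect_sum(ii, r, c, r + h // 2 - 1, c + w - 1)
    bottom = rect_sum(ii, r + h // 2, c, r + h - 1, c + w - 1)
    return top - bottom

window = np.random.rand(24, 24)             # a 24x24 detection window
print(haar_two_rect(integral_image(window), 0, 0, 24, 24))
```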
1901.02350 | 2909237195 | In recent years, face detection has experienced significant performance improvement with the boost of deep convolutional neural networks. In this report, we reimplement the state-of-the-art detector SRN and apply some tricks proposed in the recent literature to obtain an extremely strong face detector, named VIM-FD. Specifically, we exploit a more powerful backbone network like DenseNet-121, revisit the data augmentation based on data-anchor-sampling proposed in PyramidBox, and use the max-in-out label and anchor matching strategy in SFD. In addition, we also introduce the attention mechanism to provide additional supervision. Over the most popular and challenging face detection benchmark, i.e., WIDER FACE, the proposed VIM-FD achieves state-of-the-art performance. | Recently, face detection has been dominated by the CNN-based methods. CascadeCNN @cite_23 improves detection accuracy by training a series of interleaved CNN models, and the following work @cite_21 proposes to jointly train the cascaded CNNs to realize end-to-end optimization. EMO @cite_12 proposes an Expected Max Overlapping score to evaluate the quality of anchor matching. SAFD @cite_33 develops a scale proposal stage which automatically normalizes face sizes prior to detection. S @math AP @cite_20 pays attention to specific scales in the image pyramid and valid locations in each scale's layer. PCN @cite_36 proposes a cascade-style structure to rotate faces in a coarse-to-fine manner. Recent work @cite_31 designs a novel network to directly generate a clear super-resolution face from a blurry small one. | {
"cite_N": [
"@cite_33",
"@cite_36",
"@cite_21",
"@cite_23",
"@cite_31",
"@cite_12",
"@cite_20"
],
"mid": [
"2732082028",
"",
"2473640056",
"1934410531",
"2798748382",
"7746136",
"2797933562"
],
"abstract": [
"Convolutional neural network (CNN) based face detectors are inefficient in handling faces of diverse scales. They rely on either fitting a large single model to faces across a large scale range or multi-scale testing. Both are computationally expensive. We propose Scale-aware Face Detector (SAFD) to handle scale explicitly using CNN, and achieve better performance with less computation cost. Prior to detection, an efficient CNN predicts the scale distribution histogram of the faces. Then the scale histogram guides the zoom-in and zoom-out of the image. Since the faces will be approximately in uniform scale after zoom, they can be detected accurately even with much smaller CNN. Actually, more than 99 of the faces in AFW can be covered with less than two zooms per image. Extensive experiments on FDDB, MALF and AFW show advantages of SAFD.",
"",
"Cascade has been widely used in face detection, where classifier with low computation cost can be firstly used to shrink most of the background while keeping the recall. The cascade in detection is popularized by seminal Viola-Jones framework and then widely used in other pipelines, such as DPM and CNN. However, to our best knowledge, most of the previous detection methods use cascade in a greedy manner, where previous stages in cascade are fixed when training a new stage. So optimizations of different CNNs are isolated. In this paper, we propose joint training to achieve end-to-end optimization for CNN cascade. We show that the back propagation algorithm used in training CNN can be naturally used in training CNN cascade. We present how jointly training can be conducted on naive CNN cascade and more sophisticated region proposal network (RPN) and fast R-CNN. Experiments on face detection benchmarks verify the advantages of the joint training.",
"In real-world face detection, large visual variations, such as those due to pose, expression, and lighting, demand an advanced discriminative model to accurately differentiate faces from the backgrounds. Consequently, effective models for the problem tend to be computationally prohibitive. To address these two conflicting challenges, we propose a cascade architecture built on convolutional neural networks (CNNs) with very powerful discriminative capability, while maintaining high performance. The proposed CNN cascade operates at multiple resolutions, quickly rejects the background regions in the fast low resolution stages, and carefully evaluates a small number of challenging candidates in the last high resolution stage. To improve localization effectiveness, and reduce the number of candidates at later stages, we introduce a CNN-based calibration stage after each of the detection stages in the cascade. The output of each calibration stage is used to adjust the detection window position for input to the subsequent stage. The proposed method runs at 14 FPS on a single CPU core for VGA-resolution images and 100 FPS using a GPU, and achieves state-of-the-art detection performance on two public face detection benchmarks.",
"Face detection techniques have been developed for decades, and one of remaining open challenges is detecting small faces in unconstrained conditions. The reason is that tiny faces are often lacking detailed information and blurring. In this paper, we proposed an algorithm to directly generate a clear high-resolution face from a blurry small one by adopting a generative adversarial network (GAN). Toward this end, the basic GAN formulation achieves it by super-resolving and refining sequentially (e.g. SR-GAN and cycle-GAN). However, we design a novel network to address the problem of super-resolving and refining jointly. We also introduce new training losses to guide the generator network to recover fine details and to promote the discriminator network to distinguish real vs. fake and face vs. non-face simultaneously. Extensive experiments on the challenging dataset WIDER FACE demonstrate the effectiveness of our proposed method in restoring a clear high-resolution face from a blurry small one, and show that the detection performance outperforms other state-of-the-art methods.",
"The use of object proposals is an effective recent approach for increasing the computational efficiency of object detection. We propose a novel method for generating object bounding box proposals using edges. Edges provide a sparse yet informative representation of an image. Our main observation is that the number of contours that are wholly contained in a bounding box is indicative of the likelihood of the box containing an object. We propose a simple box objectness score that measures the number of edges that exist in the box minus those that are members of contours that overlap the box’s boundary. Using efficient data structures, millions of candidate boxes can be evaluated in a fraction of a second, returning a ranked set of a few thousand top-scoring proposals. Using standard metrics, we show results that are significantly more accurate than the current state-of-the-art while being faster to compute. In particular, given just 1000 proposals we achieve over 96 object recall at overlap threshold of 0.5 and over 75 recall at the more challenging overlap of 0.7. Our approach runs in 0.25 seconds and we additionally demonstrate a near real-time variant with only minor loss in accuracy.",
"Fully convolutional neural network (FCN) has been dominating the game of face detection task for a few years with its congenital capability of sliding-window-searching with shared kernels, which boiled down all the redundant calculation, and most recent state-of-the-art methods such as Faster-RCNN, SSD, YOLO and FPN use FCN as their backbone. So here comes one question: Can we find a universal strategy to further accelerate FCN with higher accuracy, so could accelerate all the recent FCN-based methods? To analyze this, we decompose the face searching space into two orthogonal directions, scale' and spatial'. Only a few coordinates in the space expanded by the two base vectors indicate foreground. So if FCN could ignore most of the other points, the searching space and false alarm should be significantly boiled down. Based on this philosophy, a novel method named scale estimation and spatial attention proposal ( @math ) is proposed to pay attention to some specific scales and valid locations in the image pyramid. Furthermore, we adopt a masked-convolution operation based on the attention result to accelerate FCN calculation. Experiments show that FCN-based method RPN can be accelerated by about @math with the help of @math and masked-FCN and at the same time it can also achieve the state-of-the-art on FDDB, AFW and MALF face detection benchmarks as well."
]
} |
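The quantity the EMO score formalizes can be approximated with a short Monte Carlo sketch (a sketch, not the paper's derivation): the expected IoU between a face whose centre lands uniformly at random on the image and its best-matching anchor, for square anchors tiled at a fixed stride. By periodicity only the offset to the nearest anchor centre matters; the sizes and stride below are illustrative:

```python
import numpy as np

def overlap_1d(w1, w2, d):
    # Overlap length of two centred intervals of widths w1, w2 offset by d.
    return max(0.0, min((w1 + w2) / 2 - abs(d), w1, w2))

def expected_max_iou(face, anchor, stride, trials=20000, seed=0):
    rng = np.random.default_rng(seed)
    offsets = rng.uniform(-stride / 2, stride / 2, size=(trials, 2))
    ious = []
    for dx, dy in offsets:
        inter = overlap_1d(face, anchor, dx) * overlap_1d(face, anchor, dy)
        ious.append(inter / (face ** 2 + anchor ** 2 - inter))
    return float(np.mean(ious))

# Smaller faces get a lower expected best-anchor overlap at the same stride,
# which is the scale/stride mismatch the EMO analysis highlights.
for size in (8, 16, 32, 64):
    print(size, round(expected_max_iou(face=size, anchor=size, stride=8), 3))
```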
1901.02350 | 2909237195 | In recent years, face detection has experienced significant performance improvement with the boost of deep convolutional neural networks. In this report, we reimplement the state-of-the-art detector SRN and apply some tricks proposed in the recent literature to obtain an extremely strong face detector, named VIM-FD. Specifically, we exploit a more powerful backbone network like DenseNet-121, revisit the data augmentation based on data-anchor-sampling proposed in PyramidBox, and use the max-in-out label and anchor matching strategy in SFD. In addition, we also introduce the attention mechanism to provide additional supervision. Over the most popular and challenging face detection benchmark, i.e., WIDER FACE, the proposed VIM-FD achieves state-of-the-art performance. | Additionally, face detection has inherited some achievements from generic object detectors, such as Faster R-CNN @cite_10 , SSD @cite_42 , FPN @cite_18 , RefineDet @cite_24 and RetinaNet @cite_34 . Face R-CNN @cite_30 combines Faster R-CNN with hard negative mining and achieves promising results. FaceBoxes @cite_13 introduces a CPU real-time detector based on SSD. Face R-FCN @cite_32 applies R-FCN in face detection and makes corresponding improvements. The face detection model for finding tiny faces @cite_26 trains separate detectors for different scales. S @math FD @cite_5 applies multiple strategies to SSD to compensate for the matching problem of small faces. SSH @cite_43 models the context information by large filters on each prediction module. PyramidBox @cite_4 utilizes contextual information with an improved SSD network structure. FAN @cite_41 introduces an anchor-level attention into RetinaNet to detect occluded faces. | {
"cite_N": [
"@cite_30",
"@cite_13",
"@cite_18",
"@cite_26",
"@cite_4",
"@cite_41",
"@cite_42",
"@cite_32",
"@cite_24",
"@cite_43",
"@cite_5",
"@cite_34",
"@cite_10"
],
"mid": [
"",
"2964325361",
"2949533892",
"2579152745",
"2963856926",
"2769576731",
"2193145675",
"",
"2770184303",
"2963011882",
"2750317406",
"2743473392",
"2613718673"
],
"abstract": [
"",
"Although tremendous strides have been made in face detection, one of the remaining open challenges is to achieve real-time speed on the CPU as well as maintain high performance, since effective models for face detection tend to be computationally prohibitive. To address this challenge, we propose a novel face detector, named FaceBoxes, with superior performance on both speed and accuracy. Specifically, our method has a lightweight yet powerful network structure that consists of the Rapidly Digested Convolutional Layers (RDCL) and the Multiple Scale Convolutional Layers (MSCL). The RDCL is designed to enable FaceBoxes to achieve real-time speed on the CPU. The MSCL aims at enriching the receptive fields and discretizing anchors over different layers to handle faces of various scales. Besides, we propose a new anchor densification strategy to make different types of anchors have the same density on the image, which significantly improves the recall rate of small faces. As a consequence, the proposed detector runs at 20 FPS on a single CPU core and 125 FPS using a GPU for VGA-resolution images. Moreover, the speed of FaceBoxes is invariant to the number of faces. We comprehensively evaluate this method and present state-of-the-art detection performance on several face detection benchmark datasets, including the AFW, PASCAL face, and FDDB.",
"Feature pyramids are a basic component in recognition systems for detecting objects at different scales. But recent deep learning object detectors have avoided pyramid representations, in part because they are compute and memory intensive. In this paper, we exploit the inherent multi-scale, pyramidal hierarchy of deep convolutional networks to construct feature pyramids with marginal extra cost. A top-down architecture with lateral connections is developed for building high-level semantic feature maps at all scales. This architecture, called a Feature Pyramid Network (FPN), shows significant improvement as a generic feature extractor in several applications. Using FPN in a basic Faster R-CNN system, our method achieves state-of-the-art single-model results on the COCO detection benchmark without bells and whistles, surpassing all existing single-model entries including those from the COCO 2016 challenge winners. In addition, our method can run at 5 FPS on a GPU and thus is a practical and accurate solution to multi-scale object detection. Code will be made publicly available.",
"Though tremendous strides have been made in object recognition, one of the remaining open challenges is detecting small objects. We explore three aspects of the problem in the context of finding small faces: the role of scale invariance, image resolution, and contextual reasoning. While most recognition approaches aim to be scale-invariant, the cues for recognizing a 3px tall face are fundamentally different than those for recognizing a 300px tall face. We take a different approach and train separate detectors for different scales. To maintain efficiency, detectors are trained in a multi-task fashion: they make use of features extracted from multiple layers of single (deep) feature hierarchy. While training detectors for large objects is straightforward, the crucial challenge remains training detectors for small objects. We show that context is crucial, and define templates that make use of massively-large receptive fields (where 99 of the template extends beyond the object of interest). Finally, we explore the role of scale in pre-trained deep networks, providing ways to extrapolate networks tuned for limited scales to rather extreme ranges. We demonstrate state-of-the-art results on massively-benchmarked face datasets (FDDB and WIDER FACE). In particular, when compared to prior art on WIDER FACE, our results reduce error by a factor of 2 (our models produce an AP of 82 while prior art ranges from 29-64 ).",
"Face detection has been well studied for many years and one of remaining challenges is to detect small, blurred and partially occluded faces in uncontrolled environment. This paper proposes a novel context-assisted single shot face detector, named PyramidBox to handle the hard face detection problem. Observing the importance of the context, we improve the utilization of contextual information in the following three aspects. First, we design a novel context anchor to supervise high-level contextual feature learning by a semi-supervised method, which we call it PyramidAnchors. Second, we propose the Low-level Feature Pyramid Network to combine adequate high-level context semantic feature and Low-level facial feature together, which also allows the PyramidBox to predict faces of all scales in a single shot. Third, we introduce a context-sensitive structure to increase the capacity of prediction network to improve the final accuracy of output. In addition, we use the method of Data-anchor-sampling to augment the training samples across different scales, which increases the diversity of training data for smaller faces. By exploiting the value of context, PyramidBox achieves superior performance among the state-of-the-art over the two common face detection benchmarks, FDDB and WIDER FACE. Our code is available in PaddlePaddle: https: github.com PaddlePaddle models tree develop fluid face_detection.",
"The performance of face detection has been largely improved with the development of convolutional neural network. However, the occlusion issue due to mask and sunglasses, is still a challenging problem. The improvement on the recall of these occluded cases usually brings the risk of high false positives. In this paper, we present a novel face detector called Face Attention Network (FAN), which can significantly improve the recall of the face detection problem in the occluded case without compromising the speed. More specifically, we propose a new anchor-level attention, which will highlight the features from the face region. Integrated with our anchor assign strategy and data augmentation techniques, we obtain state-of-art results on public face detection benchmarks like WiderFace and MAFA. The code will be released for reproduction.",
"We present a method for detecting objects in images using a single deep neural network. Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. SSD is simple relative to methods that require object proposals because it completely eliminates proposal generation and subsequent pixel or feature resampling stages and encapsulates all computation in a single network. This makes SSD easy to train and straightforward to integrate into systems that require a detection component. Experimental results on the PASCAL VOC, COCO, and ILSVRC datasets confirm that SSD has competitive accuracy to methods that utilize an additional object proposal step and is much faster, while providing a unified framework for both training and inference. For (300 300 ) input, SSD achieves 74.3 mAP on VOC2007 test at 59 FPS on a Nvidia Titan X and for (512 512 ) input, SSD achieves 76.9 mAP, outperforming a comparable state of the art Faster R-CNN model. Compared to other single stage methods, SSD has much better accuracy even with a smaller input image size. Code is available at https: github.com weiliu89 caffe tree ssd.",
"",
"For object detection, the two-stage approach (e.g., Faster R-CNN) has been achieving the highest accuracy, whereas the one-stage approach (e.g., SSD) has the advantage of high efficiency. To inherit the merits of both while overcoming their disadvantages, in this paper, we propose a novel single-shot based detector, called RefineDet, that achieves better accuracy than two-stage methods and maintains comparable efficiency of one-stage methods. RefineDet consists of two inter-connected modules, namely, the anchor refinement module and the object detection module. Specifically, the former aims to (1) filter out negative anchors to reduce search space for the classifier, and (2) coarsely adjust the locations and sizes of anchors to provide better initialization for the subsequent regressor. The latter module takes the refined anchors as the input from the former to further improve the regression and predict multi-class label. Meanwhile, we design a transfer connection block to transfer the features in the anchor refinement module to predict locations, sizes and class labels of objects in the object detection module. The multi-task loss function enables us to train the whole network in an end-to-end way. Extensive experiments on PASCAL VOC 2007, PASCAL VOC 2012, and MS COCO demonstrate that RefineDet achieves state-of-the-art detection accuracy with high efficiency. Code is available at this https URL",
"We introduce the Single Stage Headless (SSH) face detector. Unlike two stage proposal-classification detectors, SSH detects faces in a single stage directly from the early convolutional layers in a classification network. SSH is headless. That is, it is able to achieve state-of-the-art results while removing the “head” of its underlying classification network – i.e. all fully connected layers in the VGG-16 which contains a large number of parameters. Additionally, instead of relying on an image pyramid to detect faces with various scales, SSH is scale-invariant by design. We simultaneously detect faces with different scales in a single forward pass of the network, but from different layers. These properties make SSH fast and light-weight. Surprisingly, with a headless VGG-16, SSH beats the ResNet-101-based state-of-the-art on the WIDER dataset. Even though, unlike the current state-of-the-art, SSH does not use an image pyramid and is 5X faster. Moreover, if an image pyramid is deployed, our light-weight network achieves state-of-the-art on all subsets of the WIDER dataset, improving the AP by 2.5 . SSH also reaches state-of-the-art results on the FDDB and Pascal-Faces datasets while using a small input size, leading to a speed of 50 frames second on a GPU.",
"This paper presents a real-time face detector, named Single Shot Scale-invariant Face Detector (S @math FD), which performs superiorly on various scales of faces with a single deep neural network, especially for small faces. Specifically, we try to solve the common problem that anchor-based detectors deteriorate dramatically as the objects become smaller. We make contributions in the following three aspects: 1) proposing a scale-equitable face detection framework to handle different scales of faces well. We tile anchors on a wide range of layers to ensure that all scales of faces have enough features for detection. Besides, we design anchor scales based on the effective receptive field and a proposed equal proportion interval principle; 2) improving the recall rate of small faces by a scale compensation anchor matching strategy; 3) reducing the false positive rate of small faces via a max-out background label. As a consequence, our method achieves state-of-the-art detection performance on all the common face detection benchmarks, including the AFW, PASCAL face, FDDB and WIDER FACE datasets, and can run at 36 FPS on a Nvidia Titan X (Pascal) for VGA-resolution images.",
"The highest accuracy object detectors to date are based on a two-stage approach popularized by R-CNN, where a classifier is applied to a sparse set of candidate object locations. In contrast, one-stage detectors that are applied over a regular, dense sampling of possible object locations have the potential to be faster and simpler, but have trailed the accuracy of two-stage detectors thus far. In this paper, we investigate why this is the case. We discover that the extreme foreground-background class imbalance encountered during training of dense detectors is the central cause. We propose to address this class imbalance by reshaping the standard cross entropy loss such that it down-weights the loss assigned to well-classified examples. Our novel Focal Loss focuses training on a sparse set of hard examples and prevents the vast number of easy negatives from overwhelming the detector during training. To evaluate the effectiveness of our loss, we design and train a simple dense detector we call RetinaNet. Our results show that when trained with the focal loss, RetinaNet is able to match the speed of previous one-stage detectors while surpassing the accuracy of all existing state-of-the-art two-stage detectors. Code is at: this https URL",
"State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [7] and Fast R-CNN [5] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully-convolutional network that simultaneously predicts object bounds and objectness scores at each position. RPNs are trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. With a simple alternating optimization, RPN and Fast R-CNN can be trained to share convolutional features. For the very deep VGG-16 model [19], our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007 (73.2 mAP) and 2012 (70.4 mAP) using 300 proposals per image. Code is available at https: github.com ShaoqingRen faster_rcnn."
]
} |
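The scale compensation anchor matching strategy of S @math FD referenced in this record proceeds in two stages: the usual IoU-threshold assignment, then a rescue pass that gives under-matched (typically tiny) faces their top-N anchors above a lower threshold. A simplified sketch; the thresholds follow the paper's reported values, and N is approximated here by the stage-one average:

```python
import numpy as np

def match_anchors(iou, thresh=0.35, low=0.1):
    """iou: (num_faces, num_anchors) pairwise IoU matrix.
    Returns, per face, the indices of its matched anchors."""
    matches = [np.flatnonzero(row >= thresh) for row in iou]   # stage one
    counts = [len(m) for m in matches if len(m) > 0]
    top_n = max(1, int(np.mean(counts))) if counts else 1
    for i, row in enumerate(iou):                              # stage two
        if len(matches[i]) == 0:             # a tiny face matched no anchor
            cand = np.flatnonzero(row > low)                   # relax threshold
            order = np.argsort(row[cand])[::-1]
            matches[i] = cand[order[:top_n]]                   # keep the top-N
    return matches
```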
1901.02219 | 2911173632 | We consider the problem of detecting out-of-distribution (OOD) samples in deep reinforcement learning. In a value based reinforcement learning setting, we propose to use uncertainty estimation techniques directly on the agent's value estimating neural network to detect OOD samples. The focus of our work lies in analyzing the suitability of approximate Bayesian inference methods and related ensembling techniques that generate uncertainty estimates. Although prior work has shown that dropout-based variational inference techniques and bootstrap-based approaches can be used to model epistemic uncertainty, the suitability for detecting OOD samples in deep reinforcement learning remains an open question. Our results show that uncertainty estimation can be used to differentiate in- from out-of-distribution samples. Over the complete training process of the reinforcement learning agents, bootstrap-based approaches tend to produce more reliable epistemic uncertainty estimates, when compared to dropout-based approaches. | A systematic way to deal with uncertainty is via Bayesian inference. Its combination with neural networks in the form of Bayesian neural networks is realised by placing a probability distribution over the weight-values of the network @cite_12 . As calculating the exact Bayesian posterior quickly becomes computationally intractable for deep models, a popular solution is the use of approximate inference methods @cite_22 @cite_18 @cite_14 @cite_10 @cite_11 @cite_1 @cite_13 . Another option is the construction of model ensembles, e.g., based on the idea of the statistical bootstrap @cite_3 . The resulting distribution of the ensemble predictions can then be used to approximate the uncertainty @cite_8 @cite_17 . Both approaches have been used for tasks as diverse as machine vision @cite_6 , disease detection @cite_0 , or decision making @cite_20 @cite_8 . | {
"cite_N": [
"@cite_13",
"@cite_18",
"@cite_14",
"@cite_22",
"@cite_8",
"@cite_1",
"@cite_17",
"@cite_3",
"@cite_6",
"@cite_0",
"@cite_20",
"@cite_10",
"@cite_12",
"@cite_11"
],
"mid": [
"",
"2950177356",
"2951266961",
"2108677974",
"2963938771",
"2605055943",
"2963238274",
"",
"2600383743",
"2951965145",
"2404067440",
"2964059111",
"2111051539",
""
],
"abstract": [
"",
"Large multilayer neural networks trained with backpropagation have recently achieved state-of-the-art results in a wide range of problems. However, using backprop for neural net learning still has some disadvantages, e.g., having to tune a large number of hyperparameters to the data, lack of calibrated probabilistic predictions, and a tendency to overfit the training data. In principle, the Bayesian approach to learning neural networks does not have these problems. However, existing Bayesian techniques lack scalability to large dataset and network sizes. In this work we present a novel scalable method for learning Bayesian neural networks, called probabilistic backpropagation (PBP). Similar to classical backpropagation, PBP works by computing a forward propagation of probabilities through the network and then doing a backward computation of gradients. A series of experiments on ten real-world datasets show that PBP is significantly faster than other techniques, while offering competitive predictive abilities. Our experiments also show that PBP provides accurate estimates of the posterior variance on the network weights.",
"We introduce a new, efficient, principled and backpropagation-compatible algorithm for learning a probability distribution on the weights of a neural network, called Bayes by Backprop. It regularises the weights by minimising a compression cost, known as the variational free energy or the expected lower bound on the marginal likelihood. We show that this principled kind of regularisation yields comparable performance to dropout on MNIST classification. We then demonstrate how the learnt uncertainty in the weights can be used to improve generalisation in non-linear regression problems, and how this weight uncertainty can be used to drive the exploration-exploitation trade-off in reinforcement learning.",
"Variational methods have been previously explored as a tractable approximation to Bayesian inference for neural networks. However the approaches proposed so far have only been applicable to a few simple network architectures. This paper introduces an easy-to-implement stochastic variational method (or equivalently, minimum description length loss function) that can be applied to most neural networks. Along the way it revisits several common regularisers from a variational perspective. It also provides a simple pruning heuristic that can both drastically reduce the number of network weights and lead to improved generalisation. Experimental results are provided for a hierarchical multidimensional recurrent neural network applied to the TIMIT speech corpus.",
"Efficient exploration remains a major challenge for reinforcement learning (RL). Common dithering strategies for exploration, such as '-greedy, do not carry out temporally-extended (or deep) exploration; this can lead to exponentially larger data requirements. However, most algorithms for statistically efficient RL are not computationally tractable in complex environments. Randomized value functions offer a promising approach to efficient exploration with generalization, but existing algorithms are not compatible with nonlinearly parameterized value functions. As a first step towards addressing such contexts we develop bootstrapped DQN. We demonstrate that bootstrapped DQN can combine deep exploration with deep neural networks for exponentially faster learning than any dithering strategy. In the Arcade Learning Environment bootstrapped DQN substantially improves learning speed and cumulative performance across most games.",
"To obtain uncertainty estimates with real-world Bayesian deep learning models, practical inference approximations are needed. Dropout variational inference (VI) for example has been used for machine vision and medical applications, but VI can severely underestimates model uncertainty. Alpha-divergences are alternative divergences to VI's KL objective, which are able to avoid VI's uncertainty underestimation. But these are hard to use in practice: existing techniques can only use Gaussian approximating distributions, and require existing models to be changed radically, thus are of limited use for practitioners. We propose a reparametrisation of the alpha-divergence objectives, deriving a simple inference technique which, together with dropout, can be easily implemented with existing models by simply changing the loss of the model. We demonstrate improved uncertainty estimates and accuracy compared to VI in dropout networks. We study our model's epistemic uncertainty far away from the data using adversarial images, showing that these can be distinguished from non-adversarial images by examining our model's uncertainty.",
"Deep neural networks (NNs) are powerful black box predictors that have recently achieved impressive performance on a wide spectrum of tasks. Quantifying predictive uncertainty in NNs is a challenging and yet unsolved problem. Bayesian NNs, which learn a distribution over weights, are currently the state-of-the-art for estimating predictive uncertainty; however these require significant modifications to the training procedure and are computationally expensive compared to standard (non-Bayesian) NNs. We propose an alternative to Bayesian NNs that is simple to implement, readily parallelizable, requires very little hyperparameter tuning, and yields high quality predictive uncertainty estimates. Through a series of experiments on classification and regression benchmarks, we demonstrate that our method produces well-calibrated uncertainty estimates which are as good or better than approximate Bayesian NNs. To assess robustness to dataset shift, we evaluate the predictive uncertainty on test examples from known and unknown distributions, and show that our method is able to express higher uncertainty on out-of-distribution examples. We demonstrate the scalability of our method by evaluating predictive uncertainty estimates on ImageNet.",
"",
"There are two major types of uncertainty one can model. Aleatoric uncertainty captures noise inherent in the observations. On the other hand, epistemic uncertainty accounts for uncertainty in the model - uncertainty which can be explained away given enough data. Traditionally it has been difficult to model epistemic uncertainty in computer vision, but with new Bayesian deep learning tools this is now possible. We study the benefits of modeling epistemic vs. aleatoric uncertainty in Bayesian deep learning models for vision tasks. For this we present a Bayesian deep learning framework combining input-dependent aleatoric uncertainty together with epistemic uncertainty. We study models under the framework with per-pixel semantic segmentation and depth regression tasks. Further, our explicit uncertainty formulation leads to new loss functions for these tasks, which can be interpreted as learned attenuation. This makes the loss more robust to noisy data, also giving new state-of-the-art results on segmentation and depth regression benchmarks.",
"Deep learning (DL) has revolutionized the field of computer vision and image processing. In medical imaging, algorithmic solutions based on DL have been shown to achieve high performance on tasks that previously required medical experts. However, DL-based solutions for disease detection have been proposed without methods to quantify and control their uncertainty in a decision. In contrast, a physician knows whether she is uncertain about a case and will consult more experienced colleagues if needed. Here we evaluate drop-out based Bayesian uncertainty measures for DL in diagnosing diabetic retinopathy (DR) from fundus images and show that it captures uncertainty better than straightforward alternatives. Furthermore, we show that uncertainty informed decision referral can improve diagnostic performance. Experiments across different networks, tasks and datasets show robust generalization. Depending on network capacity and task dataset difficulty, we surpass 85 sensitivity and 80 specificity as recommended by the NHS when referring 0−20 of the most uncertain decisions for further inspection. We analyse causes of uncertainty by relating intuitions from 2D visualizations to the high-dimensional image space. While uncertainty is sensitive to clinically relevant cases, sensitivity to unfamiliar data samples is task dependent, but can be rendered more robust.",
"We present an algorithm for model-based reinforcement learning that combines Bayesian neural networks (BNNs) with random roll-outs and stochastic optimization for policy learning. The BNNs are trained by minimizing @math -divergences, allowing us to capture complicated statistical patterns in the transition dynamics, e.g. multi-modality and heteroskedasticity, which are usually missed by other common modeling approaches. We illustrate the performance of our method by solving a challenging benchmark where model-based approaches usually fail and by obtaining promising results in a real-world scenario for controlling a gas turbine.",
"Deep learning tools have gained tremendous attention in applied machine learning. However such tools for regression and classification do not capture model uncertainty. In comparison, Bayesian models offer a mathematically grounded framework to reason about model uncertainty, but usually come with a prohibitive computational cost. In this paper we develop a new theoretical framework casting dropout training in deep neural networks (NNs) as approximate Bayesian inference in deep Gaussian processes. A direct result of this theory gives us tools to model uncertainty with dropout NNs - extracting information from existing models that has been thrown away so far. This mitigates the problem of representing uncertainty in deep learning without sacrificing either computational complexity or test accuracy. We perform an extensive study of the properties of dropout's uncertainty. Various network architectures and nonlinearities are assessed on tasks of regression and classification, using MNIST as an example. We show a considerable improvement in predictive log-likelihood and RMSE compared to existing state-of-the-art methods, and finish by using dropout's uncertainty in deep reinforcement learning.",
"A quantitative and practical Bayesian framework is described for learning of mappings in feedforward networks. The framework makes possible (1) objective comparisons between solutions using alternative network architectures, (2) objective stopping rules for network pruning or growing procedures, (3) objective choice of magnitude and type of weight decay terms or additive regularizers (for penalizing large weights, etc.), (4) a measure of the effective number of well-determined parameters in a model, (5) quantified estimates of the error bars on network parameters and on network output, and (6) objective comparisons with alternative learning and interpolation models such as splines and radial basis functions. The Bayesian \"evidence\" automatically embodies \"Occam's razor,\" penalizing overflexible and overcomplex models. The Bayesian approach helps detect poor underlying assumptions in learning models. For learning models well matched to a problem, a good correlation between generalization ability and the Bayesian evidence is obtained.",
""
]
} |
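The record above contrasts dropout-based variational inference with bootstrap-style ensembles for epistemic uncertainty on the agent's value network. The NumPy sketch below shows the two estimators schematically; the toy Q-function and its interface are stand-ins introduced for illustration, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_dropout_uncertainty(q_fn, x, n_samples=50):
    """MC dropout: keep dropout active at test time, run several stochastic
    forward passes, and read the spread of the Q-values as uncertainty."""
    preds = np.stack([q_fn(x, stochastic=True) for _ in range(n_samples)])
    return preds.mean(axis=0), preds.std(axis=0)

def ensemble_uncertainty(members, x):
    """Bootstrap ensemble: members trained on different data resamples;
    their disagreement approximates epistemic uncertainty."""
    preds = np.stack([m(x) for m in members])
    return preds.mean(axis=0), preds.std(axis=0)

def toy_q(x, stochastic=False):
    """Stand-in for a Q-network with two actions."""
    noise = rng.normal(0.0, 0.1, size=2) if stochastic else 0.0
    return np.array([x.sum(), -x.sum()]) + noise

# Five "bootstrap members", emulated here as perturbed copies of toy_q.
members = [lambda x, b=b: toy_q(x) + b for b in rng.normal(0.0, 0.1, size=5)]
state = np.ones(4)
print(mc_dropout_uncertainty(toy_q, state))
print(ensemble_uncertainty(members, state))
# A large std on a state's Q-values flags it as potentially out-of-distribution.
```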
1901.02219 | 2911173632 | We consider the problem of detecting out-of-distribution (OOD) samples in deep reinforcement learning. In a value based reinforcement learning setting, we propose to use uncertainty estimation techniques directly on the agent's value estimating neural network to detect OOD samples. The focus of our work lies in analyzing the suitability of approximate Bayesian inference methods and related ensembling techniques that generate uncertainty estimates. Although prior work has shown that dropout-based variational inference techniques and bootstrap-based approaches can be used to model epistemic uncertainty, the suitability for detecting OOD samples in deep reinforcement learning remains an open question. Our results show that uncertainty estimation can be used to differentiate in- from out-of-distribution samples. Over the complete training process of the reinforcement learning agents, bootstrap-based approaches tend to produce more reliable epistemic uncertainty estimates, when compared to dropout-based approaches. | For the case of low-dimensional feature spaces, OOD detection (also called novelty detection) is a well-researched problem. For a survey on the topic, see e.g. , who distinguish between probabilistic, distance-based, reconstruction-based, domain-based and information theoretic methods. During the last years, several new methods based on deep neural networks were proposed for high-dimensional cases, mostly focusing on classification tasks, e.g. image classification. propose a baseline for detecting OOD examples in neural networks, based on the predicted class probabilities of a softmax classifier. improve upon this baseline by using temperature scaling and by adding perturbations to the input. These methods are not directly applicable to our focus, value-based reinforcement learning, where neural networks are used for regression tasks. Other methods, especially generative-neural-network-based techniques @cite_16 could provide a solution, but at the cost of adding separate, additional components. Our approach has the benefit of not needing additional components, as it directly integrates with the neural network used for value estimation. | {
"cite_N": [
"@cite_16"
],
"mid": [
"2599354622"
],
"abstract": [
"Obtaining models that capture imaging markers relevant for disease progression and treatment monitoring is challenging. Models are typically based on large amounts of data with annotated examples of known markers aiming at automating detection. High annotation effort and the limitation to a vocabulary of known markers limit the power of such approaches. Here, we perform unsupervised learning to identify anomalies in imaging data as candidates for markers. We propose AnoGAN, a deep convolutional generative adversarial network to learn a manifold of normal anatomical variability, accompanying a novel anomaly scoring scheme based on the mapping from image space to a latent space. Applied to new data, the model labels anomalies, and scores image patches indicating their fit into the learned distribution. Results on optical coherence tomography images of the retina demonstrate that the approach correctly identifies anomalous images, such as images containing retinal fluid or hyperreflective foci."
]
} |
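The softmax-confidence baseline and its temperature-scaled refinement mentioned in this paragraph reduce to thresholding a single score. A sketch under the assumption that class logits are available (the input-perturbation step of the refined method is omitted):

```python
import numpy as np

def max_softmax_score(logits, temperature=1.0):
    """In-distribution score: confidence of the most likely class.
    temperature=1 gives the plain softmax baseline; temperature > 1 is the
    scaling used by the refined detector to sharpen the separation."""
    z = logits / temperature
    z = z - z.max()                     # numerical stability
    probs = np.exp(z) / np.exp(z).sum()
    return probs.max()

in_dist = np.array([9.0, 1.0, 0.5])    # confident, peaked prediction
ood = np.array([2.1, 2.0, 1.9])        # near-uniform logits
print(max_softmax_score(in_dist))      # ~0.9995
print(max_softmax_score(ood))          # ~0.367
# Declaring "OOD" when the score falls below a validation-tuned threshold
# yields the detector. Note it requires a classifier head, which is why it
# does not transfer directly to regression-style value networks.
```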
1901.02446 | 2954469458 | The recently introduced panoptic segmentation task has renewed our community's interest in unifying the tasks of instance segmentation (for thing classes) and semantic segmentation (for stuff classes). However, current state-of-the-art methods for this joint task use separate and dissimilar networks for instance and semantic segmentation, without performing any shared computation. In this work, we aim to unify these methods at the architectural level, designing a single network for both tasks. Our approach is to endow Mask R-CNN, a popular instance segmentation method, with a semantic segmentation branch using a shared Feature Pyramid Network (FPN) backbone. Surprisingly, this simple baseline not only remains effective for instance segmentation, but also yields a lightweight, top-performing method for semantic segmentation. In this work, we perform a detailed study of this minimally extended version of Mask R-CNN with FPN, which we refer to as Panoptic FPN, and show it is a robust and accurate baseline for both tasks. Given its effectiveness and conceptual simplicity, we hope our method can serve as a strong baseline and aid future research in panoptic segmentation. | The joint task of thing and stuff segmentation has a rich history, including early work on scene parsing @cite_37 , image parsing @cite_33 , and holistic scene understanding @cite_1 . With the recent introduction of the joint task @cite_5 , which includes a simple task specification and carefully designed task metrics, there has been a renewed interest in the joint task. | {
"cite_N": [
"@cite_5",
"@cite_37",
"@cite_1",
"@cite_33"
],
"mid": [
"2886773299",
"2076874408",
"2137881638",
"2056860348"
],
"abstract": [
"We present a weakly supervised model that jointly performs both semantic- and instance-segmentation – a particularly relevant problem given the substantial cost of obtaining pixel-perfect annotation for these tasks. In contrast to many popular instance segmentation approaches based on object detectors, our method does not predict any overlapping instances. Moreover, we are able to segment both “thing” and “stuff” classes, and thus explain all the pixels in the image. “Thing” classes are weakly-supervised with bounding boxes, and “stuff” with image-level tags. We obtain state-of-the-art results on Pascal VOC, for both full and weak supervision (which achieves about 95 of fully-supervised performance). Furthermore, we present the first weakly-supervised results on Cityscapes for both semantic- and instance-segmentation. Finally, we use our weakly supervised framework to analyse the relationship between annotation quality and predictive performance, which is of interest to dataset creators.",
"This work proposes a method to interpret a scene by assigning a semantic label at every pixel and inferring the spatial extent of individual object instances together with their occlusion relationships. Starting with an initial pixel labeling and a set of candidate object masks for a given test image, we select a subset of objects that explain the image well and have valid overlap relationships and occlusion ordering. This is done by minimizing an integer quadratic program either using a greedy method or a standard solver. Then we alternate between using the object predictions to refine the pixel labels and vice versa. The proposed system obtains promising results on two challenging subsets of the LabelMe and SUN datasets, the largest of which contains 45, 676 images and 232 classes.",
"In this paper we propose an approach to holistic scene understanding that reasons jointly about regions, location, class and spatial extent of objects, presence of a class in the image, as well as the scene type. Learning and inference in our model are efficient as we reason at the segment level, and introduce auxiliary variables that allow us to decompose the inherent high-order potentials into pairwise potentials between a few variables with small number of states (at most the number of classes). Inference is done via a convergent message-passing algorithm, which, unlike graph-cuts inference, has no submodularity restrictions and does not require potential specific moves. We believe this is very important, as it allows us to encode our ideas and prior knowledge about the problem without the need to change the inference engine every time we introduce a new potential. Our approach outperforms the state-of-the-art on the MSRC-21 benchmark, while being much faster. Importantly, our holistic model is able to improve performance in all tasks.",
"In this paper we present a Bayesian framework for parsing images into their constituent visual patterns. The parsing algorithm optimizes the posterior probability and outputs a scene representation as a \"parsing graph\", in a spirit similar to parsing sentences in speech and natural language. The algorithm constructs the parsing graph and re-configures it dynamically using a set of moves, which are mostly reversible Markov chain jumps. This computational framework integrates two popular inference approaches--generative (top-down) methods and discriminative (bottom-up) methods. The former formulates the posterior probability in terms of generative models for images defined by likelihood functions and priors. The latter computes discriminative probabilities based on a sequence (cascade) of bottom-up tests filters. In our Markov chain algorithm design, the posterior probability, defined by the generative models, is the invariant (target) probability for the Markov chain, and the discriminative probabilities are used to construct proposal probabilities to drive the Markov chain. Intuitively, the bottom-up discriminative probabilities activate top-down generative models. In this paper, we focus on two types of visual patterns--generic visual patterns, such as texture and shading, and object patterns including human faces and text. These types of patterns compete and cooperate to explain the image and so image parsing unifies image segmentation, object detection, and recognition (if we use generic visual patterns only then image parsing will correspond to image segmentation (Tu and Zhu, 2002. IEEE Trans. PAMI, 24(5):657--673). We illustrate our algorithm on natural images of complex city scenes and show examples where image segmentation can be improved by allowing object specific knowledge to disambiguate low-level segmentation cues, and conversely where object detection can be improved by using generic visual patterns to explain away shadows and occlusions."
]
} |
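The record above attaches a semantic-segmentation branch to a shared FPN backbone. Below is a minimal PyTorch-style sketch of such a branch: upsample every pyramid level to the finest resolution, fuse by summation, and predict per-pixel class logits. The channel widths, the number of levels, the class count, and the single shared reduction conv are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SemanticBranch(nn.Module):
    """Fuse multi-scale FPN features into one per-pixel logit map."""
    def __init__(self, in_channels=256, mid_channels=128, num_classes=54):
        super().__init__()
        self.reduce = nn.Conv2d(in_channels, mid_channels, 3, padding=1)
        self.classify = nn.Conv2d(mid_channels, num_classes, 1)

    def forward(self, fpn_feats):
        # fpn_feats: list of maps from coarsest to finest resolution.
        target = fpn_feats[-1].shape[-2:]          # finest spatial size
        fused = sum(
            F.interpolate(F.relu(self.reduce(f)), size=target,
                          mode="bilinear", align_corners=False)
            for f in fpn_feats)
        return self.classify(fused)                # per-pixel class logits

feats = [torch.randn(1, 256, s, s) for s in (8, 16, 32, 64)]
print(SemanticBranch()(feats).shape)               # -> (1, 54, 64, 64)
```

Sharing the FPN features this way is what lets a single network serve both the instance and the semantic task.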
1901.02446 | 2954469458 | The recently introduced panoptic segmentation task has renewed our community's interest in unifying the tasks of instance segmentation (for thing classes) and semantic segmentation (for stuff classes). However, current state-of-the-art methods for this joint task use separate and dissimilar networks for instance and semantic segmentation, without performing any shared computation. In this work, we aim to unify these methods at the architectural level, designing a single network for both tasks. Our approach is to endow Mask R-CNN, a popular instance segmentation method, with a semantic segmentation branch using a shared Feature Pyramid Network (FPN) backbone. Surprisingly, this simple baseline not only remains effective for instance segmentation, but also yields a lightweight, top-performing method for semantic segmentation. In this work, we perform a detailed study of this minimally extended version of Mask R-CNN with FPN, which we refer to as Panoptic FPN, and show it is a robust and accurate baseline for both tasks. Given its effectiveness and conceptual simplicity, we hope our method can serve as a strong baseline and aid future research in panoptic segmentation. | This year's COCO and Mapillary Recognition Challenge @cite_6 @cite_19 featured panoptic segmentation tracks that proved popular. However, every competitive entry in the panoptic challenges used separate networks for instance and semantic segmentation, with no shared computation. For details of the not yet published winning entries in the 2018 COCO and Mapillary Recognition Challenge please see: http://cocodataset.org/workshop/coco-mapillary-eccv-2018.html . TRI-ML used separate networks for the challenge but a joint network in their recent updated tech report @cite_67 (which cites a preliminary version of our work). Our goal is to design a single network effective for both tasks that can serve as a baseline for future work. | {
"cite_N": [
"@cite_19",
"@cite_67",
"@cite_6"
],
"mid": [
"",
"2902499724",
"2952122856"
],
"abstract": [
"",
"We propose an end-to-end learning approach for panoptic segmentation, a novel task unifying instance (things) and semantic (stuff) segmentation. Our model, TASCNet, uses feature maps from a shared backbone network to predict in a single feed-forward pass both things and stuff segmentations. We explicitly constrain these two output distributions through a global things and stuff binary mask to enforce cross-task consistency. Our proposed unified network is competitive with the state of the art on several benchmarks for panoptic segmentation as well as on the individual semantic and instance segmentation tasks.",
"We present a new dataset with the goal of advancing the state-of-the-art in object recognition by placing the question of object recognition in the context of the broader question of scene understanding. This is achieved by gathering images of complex everyday scenes containing common objects in their natural context. Objects are labeled using per-instance segmentations to aid in precise object localization. Our dataset contains photos of 91 objects types that would be easily recognizable by a 4 year old. With a total of 2.5 million labeled instances in 328k images, the creation of our dataset drew upon extensive crowd worker involvement via novel user interfaces for category detection, instance spotting and instance segmentation. We present a detailed statistical analysis of the dataset in comparison to PASCAL, ImageNet, and SUN. Finally, we provide baseline performance analysis for bounding box and segmentation detection results using a Deformable Parts Model."
]
} |
1901.02446 | 2954469458 | The recently introduced panoptic segmentation task has renewed our community's interest in unifying the tasks of instance segmentation (for thing classes) and semantic segmentation (for stuff classes). However, current state-of-the-art methods for this joint task use separate and dissimilar networks for instance and semantic segmentation, without performing any shared computation. In this work, we aim to unify these methods at the architectural level, designing a single network for both tasks. Our approach is to endow Mask R-CNN, a popular instance segmentation method, with a semantic segmentation branch using a shared Feature Pyramid Network (FPN) backbone. Surprisingly, this simple baseline not only remains effective for instance segmentation, but also yields a lightweight, top-performing method for semantic segmentation. In this work, we perform a detailed study of this minimally extended version of Mask R-CNN with FPN, which we refer to as Panoptic FPN, and show it is a robust and accurate baseline for both tasks. Given its effectiveness and conceptual simplicity, we hope our method can serve as a strong baseline and aid future research in panoptic segmentation. | Region-based approaches to object detection, including the Slow/Fast/Faster/Mask R-CNN family @cite_10 @cite_27 @cite_9 @cite_7 , which apply deep networks on candidate object regions, have proven highly successful. All recent winners of the COCO detection challenges have built on Mask R-CNN @cite_0 with FPN @cite_35 , including in 2017 @cite_22 @cite_41 and 2018. @math Recent innovations include Cascade R-CNN @cite_12 , deformable convolution @cite_32 , and sync batch norm @cite_23 . In this work, the original Mask R-CNN with FPN serves as the starting point for our baseline, giving us excellent instance segmentation performance, and making our method fully compatible with these recent advances. | {
"cite_N": [
"@cite_35",
"@cite_22",
"@cite_7",
"@cite_41",
"@cite_9",
"@cite_32",
"@cite_0",
"@cite_27",
"@cite_23",
"@cite_10",
"@cite_12"
],
"mid": [
"2565639579",
"2963857746",
"",
"",
"",
"2601564443",
"",
"",
"2963016543",
"2102605133",
"2964241181"
],
"abstract": [
"Feature pyramids are a basic component in recognition systems for detecting objects at different scales. But pyramid representations have been avoided in recent object detectors that are based on deep convolutional networks, partially because they are slow to compute and memory intensive. In this paper, we exploit the inherent multi-scale, pyramidal hierarchy of deep convolutional networks to construct feature pyramids with marginal extra cost. A top-down architecture with lateral connections is developed for building high-level semantic feature maps at all scales. This architecture, called a Feature Pyramid Network (FPN), shows significant improvement as a generic feature extractor in several applications. Using a basic Faster R-CNN system, our method achieves state-of-the-art single-model results on the COCO detection benchmark without bells and whistles, surpassing all existing single-model entries including those from the COCO 2016 challenge winners. In addition, our method can run at 5 FPS on a GPU and thus is a practical and accurate solution to multi-scale object detection. Code will be made publicly available.",
"The way that information propagates in neural networks is of great importance. In this paper, we propose Path Aggregation Network (PANet) aiming at boosting information flow in proposal-based instance segmentation framework. Specifically, we enhance the entire feature hierarchy with accurate localization signals in lower layers by bottom-up path augmentation, which shortens the information path between lower layers and topmost feature. We present adaptive feature pooling, which links feature grid and all feature levels to make useful information in each level propagate directly to following proposal subnetworks. A complementary branch capturing different views for each proposal is created to further improve mask prediction. These improvements are simple to implement, with subtle extra computational overhead. Yet they are useful and make our PANet reach the 1st place in the COCO 2017 Challenge Instance Segmentation task and the 2nd place in Object Detection task without large-batch training. PANet is also state-of-the-art on MVD and Cityscapes.",
"",
"",
"",
"Convolutional neural networks (CNNs) are inherently limited to model geometric transformations due to the fixed geometric structures in their building modules. In this work, we introduce two new modules to enhance the transformation modeling capability of CNNs, namely, deformable convolution and deformable RoI pooling. Both are based on the idea of augmenting the spatial sampling locations in the modules with additional offsets and learning the offsets from the target tasks, without additional supervision. The new modules can readily replace their plain counterparts in existing CNNs and can be easily trained end-to-end by standard back-propagation, giving rise to deformable convolutional networks. Extensive experiments validate the performance of our approach. For the first time, we show that learning dense spatial transformation in deep CNNs is effective for sophisticated vision tasks such as object detection and semantic segmentation. The code is released at https: github.com msracver Deformable-ConvNets.",
"",
"",
"The development of object detection in the era of deep learning, from R-CNN [11], Fast Faster R-CNN [10, 31] to recent Mask R-CNN [14] and RetinaNet [24], mainly come from novel network, new framework, or loss design. However, mini-batch size, a key factor for the training of deep neural networks, has not been well studied for object detection. In this paper, we propose a Large Mini-Batch Object Detector (MegDet) to enable the training with a large minibatch size up to 256, so that we can effectively utilize at most 128 GPUs to significantly shorten the training time. Technically, we suggest a warmup learning rate policy and Cross-GPU Batch Normalization, which together allow us to successfully train a large mini-batch detector in much less time (e.g., from 33 hours to 4 hours), and achieve even better accuracy. The MegDet is the backbone of our submission (mmAP 52.5 ) to COCO 2017 Challenge, where we won the 1st place of Detection task.",
"Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30 relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3 . Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http: www.cs.berkeley.edu rbg rcnn.",
"In object detection, an intersection over union (IoU) threshold is required to define positives and negatives. An object detector, trained with low IoU threshold, e.g. 0.5, usually produces noisy detections. However, detection performance tends to degrade with increasing the IoU thresholds. Two main factors are responsible for this: 1) overfitting during training, due to exponentially vanishing positive samples, and 2) inference-time mismatch between the IoUs for which the detector is optimal and those of the input hypotheses. A multi-stage object detection architecture, the Cascade R-CNN, is proposed to address these problems. It consists of a sequence of detectors trained with increasing IoU thresholds, to be sequentially more selective against close false positives. The detectors are trained stage by stage, leveraging the observation that the output of a detector is a good distribution for training the next higher quality detector. The resampling of progressively improved hypotheses guarantees that all detectors have a positive set of examples of equivalent size, reducing the overfitting problem. The same cascade procedure is applied at inference, enabling a closer match between the hypotheses and the detector quality of each stage. A simple implementation of the Cascade R-CNN is shown to surpass all single-model object detectors on the challenging COCO dataset. Experiments also show that the Cascade R-CNN is widely applicable across detector architectures, achieving consistent gains independently of the baseline detector strength. The code is available at https: github.com zhaoweicai cascade-rcnn."
]
} |
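All members of the region-based family above share one computational pattern: propose candidate boxes, pool a fixed-size feature per box, and run per-region heads. Below is a schematic sketch of that flow using torchvision's roi_align; the backbone features and proposal boxes are hard-coded stand-ins (a real detector would obtain proposals from an RPN), and the head dimensions are illustrative.

```python
import torch
from torchvision.ops import roi_align

# Stand-in backbone feature map: 1 image, 256 channels, stride-16 grid.
feats = torch.randn(1, 256, 50, 50)

# Stand-in proposals in image coordinates (x1, y1, x2, y2).
proposals = [torch.tensor([[ 32.,  32., 256., 256.],
                           [400., 100., 720., 540.]])]

# Pool a fixed 7x7 feature per proposal; spatial_scale maps image
# coordinates onto the stride-16 feature grid.
regions = roi_align(feats, proposals, output_size=(7, 7),
                    spatial_scale=1.0 / 16, sampling_ratio=2)
print(regions.shape)          # -> (2, 256, 7, 7), one tensor per region

# Per-region heads then classify each pooled feature (and, in a full
# detector, regress box offsets and predict masks).
cls_head = torch.nn.Linear(256 * 7 * 7, 81)      # e.g. 80 classes + background
logits = cls_head(regions.flatten(1))
print(logits.shape)           # -> (2, 81)
```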
1901.02446 | 2954469458 | The recently introduced panoptic segmentation task has renewed our community's interest in unifying the tasks of instance segmentation (for thing classes) and semantic segmentation (for stuff classes). However, current state-of-the-art methods for this joint task use separate and dissimilar networks for instance and semantic segmentation, without performing any shared computation. In this work, we aim to unify these methods at the architectural level, designing a single network for both tasks. Our approach is to endow Mask R-CNN, a popular instance segmentation method, with a semantic segmentation branch using a shared Feature Pyramid Network (FPN) backbone. Surprisingly, this simple baseline not only remains effective for instance segmentation, but also yields a lightweight, top-performing method for semantic segmentation. In this work, we perform a detailed study of this minimally extended version of Mask R-CNN with FPN, which we refer to as Panoptic FPN, and show it is a robust and accurate baseline for both tasks. Given its effectiveness and conceptual simplicity, we hope our method can serve as a strong baseline and aid future research in panoptic segmentation. | In our work we adopt an encoder-decoder framework, namely FPN @cite_35 . In contrast to 'symmetric' decoders @cite_60 , FPN uses a lightweight decoder (see Fig. ). FPN was designed for instance segmentation, and it serves as the default backbone for Mask R-CNN. We show that, without changes, FPN is also highly effective for semantic segmentation. | {
"cite_N": [
"@cite_35",
"@cite_60"
],
"mid": [
"2565639579",
"1901129140"
],
"abstract": [
"Feature pyramids are a basic component in recognition systems for detecting objects at different scales. But pyramid representations have been avoided in recent object detectors that are based on deep convolutional networks, partially because they are slow to compute and memory intensive. In this paper, we exploit the inherent multi-scale, pyramidal hierarchy of deep convolutional networks to construct feature pyramids with marginal extra cost. A top-down architecture with lateral connections is developed for building high-level semantic feature maps at all scales. This architecture, called a Feature Pyramid Network (FPN), shows significant improvement as a generic feature extractor in several applications. Using a basic Faster R-CNN system, our method achieves state-of-the-art single-model results on the COCO detection benchmark without bells and whistles, surpassing all existing single-model entries including those from the COCO 2016 challenge winners. In addition, our method can run at 5 FPS on a GPU and thus is a practical and accurate solution to multi-scale object detection. Code will be made publicly available.",
"There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http: lmb.informatik.uni-freiburg.de people ronneber u-net ."
]
} |
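The "lightweight decoder" referred to above is FPN's top-down pathway: a 1x1 lateral projection per backbone stage, nearest-neighbor upsampling, addition, and a 3x3 smoothing conv. A minimal PyTorch sketch follows, with assumed ResNet-style channel counts (512, 1024, 2048) as an illustrative choice.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyFPN(nn.Module):
    """Top-down pathway with lateral connections (the decoder part of FPN)."""
    def __init__(self, in_channels=(512, 1024, 2048), out_channels=256):
        super().__init__()
        self.lateral = nn.ModuleList(nn.Conv2d(c, out_channels, 1)
                                     for c in in_channels)
        self.smooth = nn.ModuleList(nn.Conv2d(out_channels, out_channels,
                                              3, padding=1)
                                    for _ in in_channels)

    def forward(self, c3, c4, c5):   # backbone maps, fine -> coarse
        p5 = self.lateral[2](c5)
        p4 = self.lateral[1](c4) + F.interpolate(p5, scale_factor=2,
                                                 mode="nearest")
        p3 = self.lateral[0](c3) + F.interpolate(p4, scale_factor=2,
                                                 mode="nearest")
        return [self.smooth[0](p3), self.smooth[1](p4), self.smooth[2](p5)]

c3, c4, c5 = (torch.randn(1, c, s, s)
              for c, s in zip((512, 1024, 2048), (64, 32, 16)))
for p in TinyFPN()(c3, c4, c5):
    print(p.shape)   # 256-channel maps at three scales
```

Because every output level has the same channel width, the same region heads (or a semantic branch) can consume any level, which is what makes the decoder cheap to share across tasks.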
1901.02257 | 2949229857 | Commonsense Reading Comprehension (CRC) is a significantly challenging task, aiming at choosing the right answer for the question referring to a narrative passage, which may require commonsense knowledge inference. Most of the existing approaches only fuse the interaction information of choice, passage, and question in a simple combination manner from a union perspective, which lacks the comparison information on a deeper level. Instead, we propose a Multi-Perspective Fusion Network (MPFN), extending the single fusion method with multiple perspectives by introducing the difference and similarity fusion along with the union fusion. More comprehensive and accurate information can be captured through the three types of fusion. We design several groups of experiments on the MCScript dataset to evaluate the effectiveness of the three types of fusion respectively. From the experimental results, we can conclude that the difference fusion is comparable with union fusion, and the similarity fusion needs to be activated by the union fusion. The experimental result also shows that our MPFN model achieves the state-of-the-art with an accuracy of 83.52% on the official test set. | Many architectures on MRC follow the process of representation, attention, fusion, and aggregation @cite_11 @cite_20 @cite_7 @cite_19 @cite_16 @cite_3 . BiDAF @cite_11 fuses the passage-aware question, the question-aware passage, and the original passage in context layer by concatenation, and then uses a BiLSTM for aggregation. The fusion levels in current advanced models are categorized into three types by @cite_19 , including word-level fusion, high-level fusion, and self-boosted fusion. They further propose a FusionNet to fuse the attention information from bottom to top to obtain a fully-aware representation for answer span prediction. | {
"cite_N": [
"@cite_7",
"@cite_3",
"@cite_19",
"@cite_16",
"@cite_20",
"@cite_11"
],
"mid": [
"2789078277",
"2768331480",
"2964298608",
"2740747242",
"2963010846",
"2551396370"
],
"abstract": [
"",
"This paper presents a new MRC model that is capable of three key comprehension skills: 1) handling rich variations in question types; 2) understanding potential answer choices; and 3) drawing inference through multiple sentences. The model is based on the proposed MUlti-Strategy Inference for Comprehension (MUSIC) architecture, which is able to dynamically apply different attention strategies to different types of questions on the fly. By incorporating a multi-step inference engine analogous to ReasoNet (, 2017), MUSIC can also effectively perform multi-sentence inference in generating answers. Evaluation on the RACE dataset shows that the proposed method significantly outperforms previous state-of-the-art models by 7.5 in relative accuracy.",
"This paper introduces a new neural structure called FusionNet, which extends existing attention approaches from three perspectives. First, it puts forward a novel concept of \"History of Word\" to characterize attention information from the lowest word-level embedding up to the highest semantic-level representation. Second, it identifies an attention scoring function that better utilizes the \"history of word\" concept. Third, it proposes a fully-aware multi-level attention mechanism to capture the complete information in one text (such as a question) and exploit it in its counterpart (such as context or passage) layer by layer. We apply FusionNet to the Stanford Question Answering Dataset (SQuAD) and it achieves the first position for both single and ensemble model on the official SQuAD leaderboard at the time of writing (Oct. 4th, 2017). Meanwhile, we verify the generalization of FusionNet with two adversarial SQuAD datasets and it sets up the new state-of-the-art on both datasets: on AddSent, FusionNet increases the best F1 metric from 46.6 to 51.4 ; on AddOneSent, FusionNet boosts the best F1 metric from 56.0 to 60.7 .",
"",
"Traditional models for question answering optimize using cross entropy loss, which encourages exact answers at the cost of penalizing nearby or overlapping answers that are sometimes equally accurate. We propose a mixed objective that combines cross entropy loss with self-critical policy learning, using rewards derived from word overlap to solve the misalignment between evaluation metric and optimization objective. In addition to the mixed objective, we introduce a deep residual coattention encoder that is inspired by recent work in deep self-attention and residual networks. Our proposals improve model performance across question types and input lengths, especially for long questions that requires the ability to capture long-term dependencies. On the Stanford Question Answering Dataset, our model achieves state of the art results with 75.1 exact match accuracy and 83.1 F1, while the ensemble obtains 78.9 exact match accuracy and 86.0 F1.",
"Machine comprehension (MC), answering a query about a given context paragraph, requires modeling complex interactions between the context and the query. Recently, attention mechanisms have been successfully extended to MC. Typically these methods use attention to focus on a small portion of the context and summarize it with a fixed-size vector, couple attentions temporally, and or often form a uni-directional attention. In this paper we introduce the Bi-Directional Attention Flow (BIDAF) network, a multi-stage hierarchical process that represents the context at different levels of granularity and uses bi-directional attention flow mechanism to obtain a query-aware context representation without early summarization. Our experimental evaluations show that our model achieves the state-of-the-art results in Stanford Question Answering Dataset (SQuAD) and CNN DailyMail cloze test."
]
} |
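The union, difference, and similarity fusions compared above have natural vector-level readings. The sketch below uses one common instantiation — union as concatenation, difference as element-wise subtraction, similarity as element-wise product — which is an illustrative reading, not necessarily the paper's exact operators.

```python
import numpy as np

def fuse(a, b, mode):
    """Fuse two aligned representation vectors (or matrices) a and b."""
    if mode == "union":        # keep both views side by side
        return np.concatenate([a, b], axis=-1)
    if mode == "difference":   # what one view has that the other lacks
        return a - b
    if mode == "similarity":   # where the two views agree
        return a * b
    raise ValueError(mode)

choice_aware  = np.array([0.2, 0.9, -0.4])   # e.g. passage-attended choice
passage_aware = np.array([0.1, 0.8,  0.5])
for mode in ("union", "difference", "similarity"):
    print(mode, fuse(choice_aware, passage_aware, mode))
```

Viewing the three operators side by side makes the paper's claim concrete: union keeps all information, while difference and similarity expose explicit comparison signals that a plain concatenation leaves implicit.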
1901.02257 | 2949229857 | Commonsense Reading Comprehension (CRC) is a significantly challenging task, aiming at choosing the right answer for the question referring to a narrative passage, which may require commonsense knowledge inference. Most of the existing approaches only fuse the interaction information of choice, passage, and question in a simple combination manner from a union perspective, which lacks the comparison information on a deeper level. Instead, we propose a Multi-Perspective Fusion Network (MPFN), extending the single fusion method with multiple perspectives by introducing the difference and similarity fusion along with the union fusion. More comprehensive and accurate information can be captured through the three types of fusion. We design several groups of experiments on the MCScript dataset to evaluate the effectiveness of the three types of fusion respectively. From the experimental results, we can conclude that the difference fusion is comparable with union fusion, and the similarity fusion needs to be activated by the union fusion. The experimental result also shows that our MPFN model achieves the state-of-the-art with an accuracy of 83.52% on the official test set. | @cite_3 present a DFN model to fuse the passage, question, and choice by dynamically determining the attention strategy. | {
"cite_N": [
"@cite_3"
],
"mid": [
"2768331480"
],
"abstract": [
"This paper presents a new MRC model that is capable of three key comprehension skills: 1) handling rich variations in question types; 2) understanding potential answer choices; and 3) drawing inference through multiple sentences. The model is based on the proposed MUlti-Strategy Inference for Comprehension (MUSIC) architecture, which is able to dynamically apply different attention strategies to different types of questions on the fly. By incorporating a multi-step inference engine analogous to ReasoNet (, 2017), MUSIC can also effectively perform multi-sentence inference in generating answers. Evaluation on the RACE dataset shows that the proposed method significantly outperforms previous state-of-the-art models by 7.5 in relative accuracy."
]
} |
1901.02222 | 2886416400 | Natural Language Inference (NLI) is a fundamental and challenging task in Natural Language Processing (NLP). Most existing methods only apply a one-pass inference process on a mixed matching feature, which is a concatenation of different matching features between a premise and a hypothesis. In this paper, we propose a new model called Multi-turn Inference Matching Network (MIMN) to perform multi-turn inference on different matching features. In each turn, the model focuses on one particular matching feature instead of the mixed matching feature. To enhance the interaction between different matching features, a memory component is employed to store the history inference information. The inference of each turn is performed on the current matching feature and the memory. We conduct experiments on three different NLI datasets. The experimental results show that our model outperforms or achieves the state-of-the-art performance on all the three datasets. | Early work on the NLI task mainly uses conventional statistical methods on small-scale datasets @cite_29 @cite_18 . Recently, the neural models on NLI are based on large-scale datasets and can be categorized into two central frameworks: (i) Siamese-based framework which focuses on building sentence embeddings separately and integrates the two sentence representations to make the final prediction @cite_16 @cite_3 @cite_22 @cite_23 @cite_31 @cite_8 @cite_34 ; (ii) "matching-aggregation" framework which uses various matching methods to get the interactive space of two input sentences and then aggregates the matching results to dig for deep information @cite_19 @cite_2 @cite_28 @cite_12 @cite_33 @cite_6 @cite_21 @cite_36 @cite_34 @cite_7 @cite_13 . | {
"cite_N": [
"@cite_22",
"@cite_36",
"@cite_29",
"@cite_3",
"@cite_2",
"@cite_18",
"@cite_8",
"@cite_21",
"@cite_23",
"@cite_7",
"@cite_28",
"@cite_6",
"@cite_19",
"@cite_34",
"@cite_16",
"@cite_12",
"@cite_33",
"@cite_31",
"@cite_13"
],
"mid": [
"2963973721",
"2962998327",
"2130158090",
"2415204069",
"2512531235",
"2250790822",
"2962685628",
"2740711318",
"2742774967",
"2963719234",
"2963731165",
"2593833795",
"2962958286",
"2782363479",
"2963241825",
"2576562514",
"2413794162",
"2754586843",
"2785375517"
],
"abstract": [
"",
"",
"This paper presents the Third PASCAL Recognising Textual Entailment Challenge (RTE-3), providing an overview of the dataset creating methodology and the submitted systems. In creating this year's dataset, a number of longer texts were introduced to make the challenge more oriented to realistic scenarios. Additionally, a pool of resources was offered so that the participants could share common tools. A pilot task was also set up, aimed at differentiating unknown entailments from identified contradictions and providing justifications for overall system decisions. 26 participants submitted 44 runs, using different approaches and generally presenting new entailment models and achieving higher scores than in the previous challenges.",
"In this paper, we proposed a sentence encoding-based model for recognizing text entailment. In our approach, the encoding of sentence is a two-stage process. Firstly, average pooling was used over word-level bidirectional LSTM (biLSTM) to generate a first-stage sentence representation. Secondly, attention mechanism was employed to replace average pooling on the same sentence for better representations. Instead of using target sentence to attend words in source sentence, we utilized the sentence's first-stage representation to attend words appeared in itself, which is called \"Inner-Attention\" in our paper . Experiments conducted on Stanford Natural Language Inference (SNLI) Corpus has proved the effectiveness of \"Inner-Attention\" mechanism. With less number of parameters, our model outperformed the existing best sentence encoding-based approach by a large margin.",
"",
"Shared and internationally recognized benchmarks are fundamental for the development of any computational system. We aim to help the research community working on compositional distributional semantic models (CDSMs) by providing SICK (Sentences Involving Compositional Knowldedge), a large size English benchmark tailored for them. SICK consists of about 10,000 English sentence pairs that include many examples of the lexical, syntactic and semantic phenomena that CDSMs are expected to account for, but do not require dealing with other aspects of existing sentential data sets (idiomatic multiword expressions, named entities, telegraphic language) that are not within the scope of CDSMs. By means of crowdsourcing techniques, each pair was annotated for two crucial semantic tasks: relatedness in meaning (with a 5-point rating scale as gold score) and entailment relation between the two elements (with three possible gold labels: entailment, contradiction, and neutral). The SICK data set was used in SemEval-2014 Task 1, and it freely available for research purposes.",
"",
"Computer vision has benefited from initializing multiple deep layers with weights pretrained on large supervised training sets like ImageNet. Natural language processing (NLP) typically sees initialization of only the lowest layer of deep models with pretrained word vectors. In this paper, we use a deep LSTM encoder from an attentional sequence-to-sequence model trained for machine translation (MT) to contextualize word vectors. We show that adding these context vectors (CoVe) improves performance over using only unsupervised word and character vectors on a wide variety of common NLP tasks: sentiment analysis (SST, IMDb), question classification (TREC), entailment (SNLI), and question answering (SQuAD). For fine-grained sentiment analysis and entailment, CoVe improves performance of our baseline models to the state of the art.",
"The RepEval 2017 Shared Task aims to evaluate natural language understanding models for sentence representation, in which a sentence is represented as a fixed-length vector with neural networks and the quality of the representation is tested with a natural language inference task. This paper describes our system (alpha) that is ranked among the top in the Shared Task, on both the in-domain test set (obtaining a 74.9 accuracy) and on the cross-domain test set (also attaining a 74.9 accuracy), demonstrating that the model generalizes well to the cross-domain data. Our model is equipped with intra-sentence gated-attention composition which helps achieve a better performance. In addition to submitting our model to the Shared Task, we have also tested it on the Stanford Natural Language Inference (SNLI) dataset. We obtain an accuracy of 85.5 , which is the best reported result on SNLI when cross-sentence attention is not allowed, the same condition enforced in RepEval 2017.",
"Natural Language Inference (NLI) task requires an agent to determine the logical relationship between a natural language premise and a natural language hypothesis. We introduce Interactive Inference Network (IIN), a novel class of neural network architectures that is able to achieve high-level understanding of the sentence pair by hierarchically extracting semantic features from interaction space. We show that an interaction tensor (attention weight) contains semantic information to solve natural language inference, and a denser interaction tensor contains richer semantic information. One instance of such architecture, Densely Interactive Inference Network (DIIN), demonstrates the state-of-the-art performance on large scale NLI copora and large-scale NLI alike corpus. It's noteworthy that DIIN achieve a greater than 20 error reduction on the challenging Multi-Genre NLI (MultiNLI) dataset with respect to the strongest published system.",
"",
"",
"Abstract: While most approaches to automatically recognizing entailment relations have used classifiers employing hand engineered features derived from complex natural language processing pipelines, in practice their performance has been only slightly better than bag-of-word pair classifiers using only lexical similarity. The only attempt so far to build an end-to-end differentiable neural network for entailment failed to outperform such a simple similarity classifier. In this paper, we propose a neural model that reads two sentences to determine entailment using long short-term memory units. We extend this model with a word-by-word neural attention mechanism that encourages reasoning over entailments of pairs of words and phrases. Furthermore, we present a qualitative analysis of attention weights produced by this model, demonstrating such reasoning capabilities. On a large entailment dataset this model outperforms the previous best neural model and a classifier with engineered features by a substantial margin. It is the first generic end-to-end differentiable system that achieves state-of-the-art accuracy on a textual entailment dataset.",
"This paper presents a new deep learning architecture for Natural Language Inference (NLI). Firstly, we introduce a new compare-propagate architecture where alignments pairs are compared and then propagated to upper layers for enhanced representation learning. Secondly, we adopt novel factorization layers for efficient compression of alignment vectors into scalar valued features, which are then be used to augment the base word representations. The design of our approach is aimed to be conceptually simple, compact and yet powerful. We conduct experiments on three popular benchmarks, SNLI, MultiNLI and SciTail, achieving state-of-the-art performance on all. A lightweight parameterization of our model enjoys a @math reduction in parameter size compared to the ESIM and DIIN, while maintaining competitive performance. Visual analysis shows that our propagated features are highly interpretable, opening new avenues to explainability in neural NLI models.",
"",
"",
"",
"Recurrent neural nets (RNN) and convolutional neural nets (CNN) are widely used on NLP tasks to capture the long-term and local dependencies, respectively. Attention mechanisms have recently attracted enormous interest due to their highly parallelizable computation, significantly less training time, and flexibility in modeling dependencies. We propose a novel attention mechanism in which the attention between elements from input sequence(s) is directional and multi-dimensional (i.e., feature-wise). A light-weight neural net, \"Directional Self-Attention Network (DiSAN)\", is then proposed to learn sentence embedding, based solely on the proposed attention without any RNN CNN structure. DiSAN is only composed of a directional self-attention with temporal order encoded, followed by a multi-dimensional attention that compresses the sequence into a vector representation. Despite its simple form, DiSAN outperforms complicated RNN models on both prediction quality and time efficiency. It achieves the best test accuracy among all sentence encoding methods and improves the most recent best result by 1.02 on the Stanford Natural Language Inference (SNLI) dataset, and shows state-of-the-art test accuracy on the Stanford Sentiment Treebank (SST), Multi-Genre natural language inference (MultiNLI), Sentences Involving Compositional Knowledge (SICK), Customer Review, MPQA, TREC question-type classification and Subjectivity (SUBJ) datasets.",
"We present a novel deep learning architecture to address the natural language inference (NLI) task. Existing approaches mostly rely on simple reading mechanisms for independent encoding of the premise and hypothesis. Instead, we propose a novel dependent reading bidirectional LSTM network (DR-BiLSTM) to efficiently model the relationship between a premise and a hypothesis during encoding and inference. We also introduce a sophisticated ensemble strategy to combine our proposed models, which noticeably improves final predictions. Finally, we demonstrate how the results can be improved further with an additional preprocessing step. Our evaluation shows that DR-BiLSTM obtains the best single model and ensemble model results achieving the new state-of-the-art scores on the Stanford NLI dataset."
]
} |
1901.02291 | 2911208170 | Recently, a number of works have studied clustering strategies that combine classical clustering algorithms and deep learning methods. These approaches follow either a sequential way, where a deep representation is learned using a deep autoencoder before obtaining clusters with k-means, or a simultaneous way, where the deep representation and the clusters are learned jointly by optimizing a single objective function. Both strategies improve clustering performance; however, the robustness of these approaches is impeded by several deep autoencoder setting issues, such as the weight initialization, the width and number of layers, and the number of epochs. To alleviate the impact of such hyperparameter settings on the clustering performance, we propose a new model which combines the strengths of spectral clustering and deep autoencoders in an ensemble learning framework. Extensive experiments on various benchmark datasets demonstrate the potential and robustness of our approach compared to state-of-the-art deep clustering methods. | Within this framework, classical dimensionality reduction approaches, e.g., Principal Component Analysis (PCA), have been widely considered for the embedding task. However, the linear nature of such techniques makes it challenging to infer faithful representations of real-world data, which typically lie on highly non-linear manifolds. This motivates the investigation of deep learning models (e.g., autoencoders, convolutional neural networks), which have so far proven successful in extracting highly non-linear features from complex data, such as text, images or graphs @cite_16 @cite_44 @cite_12 . | {
"cite_N": [
"@cite_44",
"@cite_16",
"@cite_12"
],
"mid": [
"",
"2100495367",
"2134842679"
],
"abstract": [
"",
"High-dimensional data can be converted to low-dimensional codes by training a multilayer neural network with a small central layer to reconstruct high-dimensional input vectors. Gradient descent can be used for fine-tuning the weights in such “autoencoder” networks, but this works well only if the initial weights are close to a good solution. We describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data.",
"Recent work has shown how denoising and contractive autoencoders implicitly capture the structure of the data-generating density, in the case where the corruption noise is Gaussian, the reconstruction error is the squared error, and the data is continuous-valued. This has led to various proposals for sampling from this implicitly learned density function, using Langevin and Metropolis-Hastings MCMC. However, it remained unclear how to connect the training procedure of regularized auto-encoders to the implicit estimation of the underlying data-generating distribution when the data are discrete, or using other forms of corruption process and reconstruction errors. Another issue is the mathematical justification which is only valid in the limit of small corruption noise. We propose here a different attack on the problem, which deals with all these issues: arbitrary (but noisy enough) corruption, arbitrary reconstruction loss (seen as a log-likelihood), handling both discrete and continuous-valued variables, and removing the bias due to non-infinitesimal corruption noise (or non-infinitesimal contractive penalty)."
]
} |
1901.02291 | 2911208170 | Recently, a number of works have studied clustering strategies that combine classical clustering algorithms and deep learning methods. These approaches follow either a sequential way, where a deep representation is learned using a deep autoencoder before obtaining clusters with k-means, or a simultaneous way, where the deep representation and the clusters are learned jointly by optimizing a single objective function. Both strategies improve clustering performance; however, the robustness of these approaches is impeded by several deep autoencoder setting issues, such as the weight initialization, the width and number of layers, and the number of epochs. To alleviate the impact of such hyperparameter settings on the clustering performance, we propose a new model which combines the strengths of spectral clustering and deep autoencoders in an ensemble learning framework. Extensive experiments on various benchmark datasets demonstrate the potential and robustness of our approach compared to state-of-the-art deep clustering methods. | Deep autoencoders (DAEs) have proven useful for dimensionality reduction @cite_16 and image denoising. In particular, autoencoders (AEs) can non-linearly transform data into a latent space. When this latent space has a lower dimension than the original one @cite_16 , this can be viewed as a form of non-linear PCA. An autoencoder typically consists of an encoder stage, which provides an encoding of the original data in a lower dimension, and a decoder stage, which defines the data reconstruction cost. In the clustering context, the general idea is to embed the data into a low-dimensional latent space and then perform clustering in this new space. The goal of the embedding is to learn new representations of the objects of interest (e.g., images) that encode only the most relevant information characterizing the original data, which would, for example, reduce noise and sparsity. A minimal sketch of this encode-then-cluster pipeline follows this record. | {
"cite_N": [
"@cite_16"
],
"mid": [
"2100495367"
],
"abstract": [
"High-dimensional data can be converted to low-dimensional codes by training a multilayer neural network with a small central layer to reconstruct high-dimensional input vectors. Gradient descent can be used for fine-tuning the weights in such “autoencoder” networks, but this works well only if the initial weights are close to a good solution. We describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data."
]
} |
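The encoder-decoder pipeline described in the related-work paragraph above maps naturally to a short sketch. The following Python example is illustrative only: the layer widths, latent dimension, optimizer settings, and the use of scikit-learn's k-means on the latent codes are assumptions for demonstration, not the authors' implementation; `X` is assumed to be a float tensor of shape (n_samples, n_features).

```python
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

class Autoencoder(nn.Module):
    """Minimal autoencoder: the encoder maps data to a low-dimensional
    latent space; the decoder defines the reconstruction cost."""
    def __init__(self, in_dim, latent_dim=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                     nn.Linear(256, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                     nn.Linear(256, in_dim))

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

def sequential_deep_clustering(X, n_clusters=10, epochs=50):
    """Sequential strategy: train on reconstruction only, then run
    k-means on the learned latent codes."""
    model = Autoencoder(in_dim=X.shape[1])
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):
        x_hat, _ = model(X)
        loss = nn.functional.mse_loss(x_hat, X)  # reconstruction cost
        opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():
        _, Z = model(X)
    return KMeans(n_clusters=n_clusters).fit_predict(Z.numpy())
```

Clustering in the latent space rather than in the raw feature space is what yields the noise- and sparsity-reduction effect the paragraph refers to.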
1901.02291 | 2911208170 | Recently, a number of works have studied clustering strategies that combine classical clustering algorithms and deep learning methods. These approaches follow either a sequential way, where a deep representation is learned using a deep autoencoder before obtaining clusters with k-means, or a simultaneous way, where the deep representation and the clusters are learned jointly by optimizing a single objective function. Both strategies improve clustering performance; however, the robustness of these approaches is impeded by several deep autoencoder setting issues, such as the weight initialization, the width and number of layers, and the number of epochs. To alleviate the impact of such hyperparameter settings on the clustering performance, we propose a new model which combines the strengths of spectral clustering and deep autoencoders in an ensemble learning framework. Extensive experiments on various benchmark datasets demonstrate the potential and robustness of our approach compared to state-of-the-art deep clustering methods. | Since the embedding process is not guaranteed to infer representations that are suitable for the clustering task, several authors recommend performing both tasks jointly so as to let clustering govern feature extraction and vice versa. In @cite_1 , the authors propose a general framework, called DeepCluster, to integrate traditional clustering methods into deep learning models, and adopt the Alternating Direction Method of Multipliers (ADMM) to optimize it. In @cite_35 , a joint dimensionality reduction and clustering approach is proposed, in which dimensionality reduction is accomplished by learning a deep neural network. A sketch of such a joint objective follows this record. | {
"cite_N": [
"@cite_35",
"@cite_1"
],
"mid": [
"2533545350",
"2777922598"
],
"abstract": [
"Most learning approaches treat dimensionality reduction (DR) and clustering separately (i.e., sequentially), but recent research has shown that optimizing the two tasks jointly can substantially improve the performance of both. The premise behind the latter genre is that the data samples are obtained via linear transformation of latent representations that are easy to cluster; but in practice, the transformation from the latent space to the data can be more complicated. In this work, we assume that this transformation is an unknown and possibly nonlinear function. To recover the 'clustering-friendly' latent representations and to better cluster the data, we propose a joint DR and K-means clustering approach in which DR is accomplished via learning a deep neural network (DNN). The motivation is to keep the advantages of jointly optimizing the two tasks, while exploiting the deep neural network's ability to approximate any nonlinear function. This way, the proposed approach can work well for a broad class of generative models. Towards this end, we carefully design the DNN structure and the associated joint optimization criterion, and propose an effective and scalable algorithm to handle the formulated optimization problem. Experiments using different real datasets are employed to showcase the effectiveness of the proposed approach.",
"In this paper, we propose a general framework DeepCluster to integrate traditional clustering methods into deep learning (DL) models and adopt Alternating Direction of Multiplier Method (ADMM) to optimize it. While most existing DL based clustering techniques have separate feature learning (via DL) and clustering (with traditional clustering methods), DeepCluster simultaneously learns feature representation and does cluster assignment under the same framework. Furthermore, it is a general and flexible framework that can employ different networks and clustering methods. We demonstrate the effectiveness of DeepCluster by integrating two popular clustering methods: K-means and Gaussian Mixture Model (GMM) into deep networks. The experimental results shown that our method can achieve state-of-the-art performance on learning representation for clustering analysis. Code and data related to this chapter are available at: https: github.com JennyQQL DeepClusterADMM-Release."
]
} |
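The joint strategy discussed above can be sketched as a single objective that couples reconstruction with a k-means-style term in the latent space, so that clustering governs feature extraction. This is a generic sketch, not the exact DeepCluster or ADMM formulation of @cite_1 ; the weighting coefficient `lam` and the hard-assignment update are assumptions.

```python
import torch
import torch.nn.functional as F

def joint_loss(x, x_hat, z, centroids, assign, lam=0.1):
    """Single objective: reconstruction + k-means term in latent space.
    `assign` holds the current hard cluster assignment of each sample."""
    recon = F.mse_loss(x_hat, x)
    kmeans = ((z - centroids[assign]) ** 2).sum(dim=1).mean()
    return recon + lam * kmeans

@torch.no_grad()
def update_clusters(z, centroids):
    """Alternating step with the network fixed: reassign samples to the
    nearest centroid, then recompute each centroid."""
    assign = torch.cdist(z, centroids).argmin(dim=1)
    for k in range(centroids.shape[0]):
        mask = assign == k
        if mask.any():
            centroids[k] = z[mask].mean(dim=0)
    return assign, centroids
```

Alternating between gradient steps on `joint_loss` and calls to `update_clusters` is the usual optimization pattern for such coupled objectives.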
1901.02425 | 2910681974 | Recent Salient Object Detection (SOD) systems are mostly based on Convolutional Neural Networks (CNNs). Specifically, the Deeply Supervised Saliency (DSS) system has shown that it is very useful to add short connections to the network and to supervise the side outputs. In this work, we propose a new SOD system which aims at designing a more efficient and effective way to pass back global information. Richer and Deeper Supervision (RDS) is applied to better combine features from each side output without demanding much extra computational space. Meanwhile, the backbone network used for SOD is normally pre-trained on the object classification dataset ImageNet. However, the pre-trained model has been trained on cropped images so as to focus only on distinguishing features within the region of the object, whereas the ignored background information is also significant for the task of SOD. We try to solve this problem by introducing training data designed for object detection. Coarse global information is learned from an entire image with its bounding box before training on the SOD dataset. The large scale of the object image data can slightly improve the performance of SOD. Our experiments show that the proposed RDS network achieves state-of-the-art results on five public SOD datasets. | Early SOD methods are mainly based on hand-crafted features to capture saliency information, e.g. global contrast @cite_38 , multiple color spaces @cite_25 , clustering on reconstruction error @cite_34 and objectness cues @cite_33 . However, traditional methods can hardly outperform CNN-based ones, because low-level features are not rich enough to encode saliency information. In this section, we mainly discuss CNN-based state-of-the-art methods because they are more related to our work. | {
"cite_N": [
"@cite_38",
"@cite_34",
"@cite_25",
"@cite_33"
],
"mid": [
"",
"2128340050",
"2020523103",
"2065372998"
],
"abstract": [
"",
"In this paper, we propose a visual saliency detection algorithm from the perspective of reconstruction errors. The image boundaries are first extracted via super pixels as likely cues for background templates, from which dense and sparse appearance models are constructed. For each image region, we first compute dense and sparse reconstruction errors. Second, the reconstruction errors are propagated based on the contexts obtained from K-means clustering. Third, pixel-level saliency is computed by an integration of multi-scale reconstruction errors and refined by an object-biased Gaussian model. We apply the Bayes formula to integrate saliency measures based on dense and sparse reconstruction errors. Experimental results show that the proposed algorithm performs favorably against seventeen state-of-the-art methods in terms of precision and recall. In addition, the proposed algorithm is demonstrated to be more effective in highlighting salient objects uniformly and robust to background noise.",
"In this paper, we introduce a novel technique to automatically detect salient regions of an image via high-dimensional color transform. Our main idea is to represent a saliency map of an image as a linear combination of high-dimensional color space where salient regions and backgrounds can be distinctively separated. This is based on an observation that salient regions often have distinctive colors compared to the background in human perception, but human perception is often complicated and highly nonlinear. By mapping a low dimensional RGB color to a feature vector in a high-dimensional color space, we show that we can linearly separate the salient regions from the background by finding an optimal linear combination of color coefficients in the high-dimensional color space. Our high dimensional color space incorporates multiple color representations including RGB, CIELab, HSV and with gamma corrections to enrich its representative power. Our experimental results on three benchmark datasets show that our technique is effective, and it is computationally efficient in comparison to previous state-of-the-art techniques.",
"It is known that purely low-level saliency cues such as frequency does not lead to a good salient object detection result, requiring high-level knowledge to be adopted for successful discovery of task-independent salient objects. In this paper, we propose an efficient way to combine such high-level saliency priors and low-level appearance models. We obtain the high-level saliency prior with the objectness algorithm to find potential object candidates without the need of category information, and then enforce the consistency among the salient regions using a Gaussian MRF with the weights scaled by diverse density that emphasizes the influence of potential foreground pixels. Our model obtains saliency maps that assign high scores for the whole salient object, and achieves state-of-the-art performance on benchmark datasets covering various foreground statistics."
]
} |
1901.02425 | 2910681974 | Recent Salient Object Detection (SOD) systems are mostly based on Convolutional Neural Networks (CNNs). Specifically, the Deeply Supervised Saliency (DSS) system has shown that it is very useful to add short connections to the network and to supervise the side outputs. In this work, we propose a new SOD system which aims at designing a more efficient and effective way to pass back global information. Richer and Deeper Supervision (RDS) is applied to better combine features from each side output without demanding much extra computational space. Meanwhile, the backbone network used for SOD is normally pre-trained on the object classification dataset ImageNet. However, the pre-trained model has been trained on cropped images so as to focus only on distinguishing features within the region of the object, whereas the ignored background information is also significant for the task of SOD. We try to solve this problem by introducing training data designed for object detection. Coarse global information is learned from an entire image with its bounding box before training on the SOD dataset. The large scale of the object image data can slightly improve the performance of SOD. Our experiments show that the proposed RDS network achieves state-of-the-art results on five public SOD datasets. | One recently proposed method @cite_10 is closely related to ours because both methods exploit multiple side outputs and short connections. As shown in Figure , HED was proposed to detect contours, but no short connections are applied. The DSS network connects each layer to all the previous layers. However, its large-size transposed convolutions require much space to compute, which becomes more problematic when passing more channels. In contrast, our RDS design only concatenates neighbouring side outputs, using a transposed convolution with a small kernel to save space. A sketch of this neighbouring-fusion design follows this record. | {
"cite_N": [
"@cite_10"
],
"mid": [
"2569272946"
],
"abstract": [
"Recent progress on salient object detection is substantial, benefiting mostly from the explosive development of Convolutional Neural Networks (CNNs). Semantic segmentation and salient object detection algorithms developed lately have been mostly based on Fully Convolutional Neural Networks (FCNs). There is still a large room for improvement over the generic FCN models that do not explicitly deal with the scale-space problem. The Holistically-Nested Edge Detector (HED) provides a skip-layer structure with deep supervision for edge and boundary detection, but the performance gain of HED on saliency detection is not obvious. In this paper, we propose a new salient object detection method by introducing short connections to the skip-layer structures within the HED architecture. Our framework takes full advantage of multi-level and multi-scale features extracted from FCNs, providing more advanced representations at each layer, a property that is critically needed to perform segment detection. Our method produces state-of-the-art results on 5 widely tested salient object detection benchmarks, with advantages in terms of efficiency (0.08 seconds per image), effectiveness, and simplicity over the existing algorithms. Beyond that, we conduct an exhaustive analysis of the role of training data on performance. We provide a training set for future research and fair comparisons."
]
} |
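The space argument made above — DSS-style dense connections need large transposed convolutions, while RDS only fuses neighbouring side outputs with a small kernel — can be illustrated with a sketch. The channel counts and the 4x4 stride-2 kernel are assumptions for demonstration, not the exact RDS configuration.

```python
import torch
import torch.nn as nn

class NeighbourFusion(nn.Module):
    """Fuse a side output with its deeper neighbour: the deeper map is
    upsampled 2x by a transposed convolution with a small 4x4 kernel,
    then concatenated -- cheaper than upsampling every deep side output
    with large kernels, as in DSS-style all-to-all connections."""
    def __init__(self, ch_shallow, ch_deep, ch_out):
        super().__init__()
        self.up = nn.ConvTranspose2d(ch_deep, ch_out, kernel_size=4,
                                     stride=2, padding=1)
        self.fuse = nn.Conv2d(ch_shallow + ch_out, ch_out,
                              kernel_size=3, padding=1)

    def forward(self, shallow, deep):
        deep_up = self.up(deep)  # doubles spatial size to match `shallow`
        return self.fuse(torch.cat([shallow, deep_up], dim=1))
```

Chaining such modules from the deepest side output toward the shallowest passes global information back one neighbour at a time, rather than through all-to-all connections.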
1901.02425 | 2910681974 | Recent Salient Object Detection (SOD) systems are mostly based on Convolutional Neural Networks (CNNs). Specifically, the Deeply Supervised Saliency (DSS) system has shown that it is very useful to add short connections to the network and to supervise the side outputs. In this work, we propose a new SOD system which aims at designing a more efficient and effective way to pass back global information. Richer and Deeper Supervision (RDS) is applied to better combine features from each side output without demanding much extra computational space. Meanwhile, the backbone network used for SOD is normally pre-trained on the object classification dataset ImageNet. However, the pre-trained model has been trained on cropped images so as to focus only on distinguishing features within the region of the object, whereas the ignored background information is also significant for the task of SOD. We try to solve this problem by introducing training data designed for object detection. Coarse global information is learned from an entire image with its bounding box before training on the SOD dataset. The large scale of the object image data can slightly improve the performance of SOD. Our experiments show that the proposed RDS network achieves state-of-the-art results on five public SOD datasets. | Another related work is @cite_33 , because both methods utilize the objectness cue to predict salient regions without using category information. Their method @cite_33 used various hand-crafted features to propose box candidates, and pixel-level scores are computed by applying a Gaussian process. Our method is CNN-based, and we simply try to convert the task of object detection into SOD. | {
"cite_N": [
"@cite_33"
],
"mid": [
"2065372998"
],
"abstract": [
"It is known that purely low-level saliency cues such as frequency does not lead to a good salient object detection result, requiring high-level knowledge to be adopted for successful discovery of task-independent salient objects. In this paper, we propose an efficient way to combine such high-level saliency priors and low-level appearance models. We obtain the high-level saliency prior with the objectness algorithm to find potential object candidates without the need of category information, and then enforce the consistency among the salient regions using a Gaussian MRF with the weights scaled by diverse density that emphasizes the influence of potential foreground pixels. Our model obtains saliency maps that assign high scores for the whole salient object, and achieves state-of-the-art performance on benchmark datasets covering various foreground statistics."
]
} |
1901.02352 | 2910295028 | With the development of the multimedia era, multi-view data is generated in various fields. In contrast with single-view data, multi-view data brings more useful information and should be carefully excavated. Therefore, it is essential to fully exploit the complementary information embedded in multiple views to enhance the performance of many tasks. Especially for high-dimensional data, developing a multi-view dimension reduction algorithm to obtain low-dimensional representations is of vital importance but challenging. In this paper, we propose a novel multi-view dimension reduction algorithm named Auto-weighted Multi-view Sparse Reconstructive Embedding (AMSRE) to deal with this problem. AMSRE fully exploits the sparse reconstructive correlations between features from multiple views. Furthermore, it is equipped with an auto-weighted technique to treat multiple views discriminatively according to their contributions. Various experiments have verified the excellent performance of the proposed AMSRE. | MSE achieves good performance for multi-view dimension reduction. It can encode different features from multiple views to achieve a physically meaningful embedding. Xia et al. @cite_18 extend Laplacian Eigenmaps (LE) @cite_5 to a multi-view mode and develop an architecture that learns weights for different views according to their contributions. Furthermore, MSE integrates Laplacian graphs from multiple views via global coordinate alignment. The proposed objective function of MSE can be summarized as shown below. | {
"cite_N": [
"@cite_5",
"@cite_18"
],
"mid": [
"2156718197",
"2166782149"
],
"abstract": [
"Drawing on the correspondence between the graph Laplacian, the Laplace-Beltrami operator on a manifold, and the connections to the heat equation, we propose a geometrically motivated algorithm for constructing a representation for data sampled from a low dimensional manifold embedded in a higher dimensional space. The algorithm provides a computationally efficient approach to nonlinear dimensionality reduction that has locality preserving properties and a natural connection to clustering. Several applications are considered.",
"In computer vision and multimedia search, it is common to use multiple features from different views to represent an object. For example, to well characterize a natural scene image, it is essential to find a set of visual features to represent its color, texture, and shape information and encode each feature into a vector. Therefore, we have a set of vectors in different spaces to represent the image. Conventional spectral-embedding algorithms cannot deal with such datum directly, so we have to concatenate these vectors together as a new vector. This concatenation is not physically meaningful because each feature has a specific statistical property. Therefore, we develop a new spectral-embedding algorithm, namely, multiview spectral embedding (MSE), which can encode different features in different ways, to achieve a physically meaningful embedding. In particular, MSE finds a low-dimensional embedding wherein the distribution of each view is sufficiently smooth, and MSE explores the complementary property of different views. Because there is no closed-form solution for MSE, we derive an alternating optimization-based iterative algorithm to obtain the low-dimensional embedding. Empirical evaluations based on the applications of image retrieval, video annotation, and document clustering demonstrate the effectiveness of the proposed approach."
]
} |
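The sentence above refers to the MSE objective, but the formula itself did not survive in this record. As a hedged reconstruction — the form commonly stated in the multiview spectral embedding literature, not quoted from this record — the objective can be written as:

```latex
\min_{Y,\,\alpha}\ \sum_{v=1}^{m} \alpha_v^{\gamma}\,
  \operatorname{tr}\!\left( Y L^{(v)} Y^{\top} \right)
\quad \text{s.t.}\quad
  Y Y^{\top} = I,\ \ \sum_{v=1}^{m} \alpha_v = 1,\ \ \alpha_v \ge 0,
```

where Y is the shared low-dimensional embedding, L^{(v)} is the graph Laplacian of view v, alpha_v is the auto-learned weight of view v, and gamma > 1 smooths the weight distribution; Y and alpha are typically optimized alternately.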
1901.02184 | 2902503307 | In this paper, we propose to disentangle and interpret contextual effects that are encoded in a pre-trained deep neural network. We use our method to explain the gaming strategy of the AlphaGo Zero model. Unlike previous studies that visualized image appearances corresponding to the network output or a neural activation only from a global perspective, our research aims to clarify how a certain input unit (dimension) collaborates with other units (dimensions) to constitute inference patterns of the neural network and thus contribute to the network output. The analysis of local contextual effects w.r.t. certain input units is of special value in real applications. Explaining the logic of the AlphaGo Zero model is a typical application. In experiments, our method successfully disentangled the rationale of each move during the Go game. | Visualization of filters in intermediate layers is the most direct method to analyze the knowledge hidden inside a neural network. @cite_8 @cite_24 @cite_16 @cite_28 @cite_7 @cite_35 @cite_0 showed the appearances that maximize the score of a given unit. @cite_28 used up-convolutional nets to invert CNN feature maps to their corresponding images. A sketch of this activation-maximization recipe follows this record. | {
"cite_N": [
"@cite_35",
"@cite_7",
"@cite_8",
"@cite_28",
"@cite_24",
"@cite_0",
"@cite_16"
],
"mid": [
"2745742138",
"1825675169",
"1849277567",
"2963989815",
"1915485278",
"2503388974",
"2169393322"
],
"abstract": [
"Sometimes it is not enough for a DNN to produce an outcome. For example, in applications such as healthcare, users need to understand the rationale of the decisions. Therefore, it is imperative to develop algorithms to learn models with good interpretability (Doshi-Velez 2017). An important factor that leads to the lack of interpretability of DNNs is the ambiguity of neurons, where a neuron may fire for various unrelated concepts. This work aims to increase the interpretability of DNNs on the whole image space by reducing the ambiguity of neurons. In this paper, we make the following contributions: 1) We propose a metric to evaluate the consistency level of neurons in a network quantitatively. 2) We find that the learned features of neurons are ambiguous by leveraging adversarial examples. 3) We propose to improve the consistency of neurons on adversarial example subset by an adversarial training algorithm with a consistent loss.",
"Recent years have produced great advances in training large, deep neural networks (DNNs), including notable successes in training convolutional neural networks (convnets) to recognize natural images. However, our understanding of how these models work, especially what computations they perform at intermediate layers, has lagged behind. Progress in the field will be further accelerated by the development of better tools for visualizing and interpreting neural nets. We introduce two such tools here. The first is a tool that visualizes the activations produced on each layer of a trained convnet as it processes an image or video (e.g. a live webcam stream). We have found that looking at live activations that change in response to user input helps build valuable intuitions about how convnets work. The second tool enables visualizing features at each layer of a DNN via regularized optimization in image space. Because previous versions of this idea produced less recognizable images, here we introduce several new regularization methods that combine to produce qualitatively clearer, more interpretable visualizations. Both tools are open source and work on a pre-trained convnet with minimal setup.",
"Large Convolutional Network models have recently demonstrated impressive classification performance on the ImageNet benchmark [18]. However there is no clear understanding of why they perform so well, or how they might be improved. In this paper we explore both issues. We introduce a novel visualization technique that gives insight into the function of intermediate feature layers and the operation of the classifier. Used in a diagnostic role, these visualizations allow us to find model architectures that outperform on the ImageNet classification benchmark. We also perform an ablation study to discover the performance contribution from different model layers. We show our ImageNet model generalizes well to other datasets: when the softmax classifier is retrained, it convincingly beats the current state-of-the-art results on Caltech-101 and Caltech-256 datasets.",
"Feature representations, both hand-designed and learned ones, are often hard to analyze and interpret, even when they are extracted from visual data. We propose a new approach to study image representations by inverting them with an up-convolutional neural network. We apply the method to shallow representations (HOG, SIFT, LBP), as well as to deep networks. For shallow representations our approach provides significantly better reconstructions than existing methods, revealing that there is surprisingly rich information contained in these features. Inverting a deep network trained on ImageNet provides several insights into the properties of the feature representation learned by the network. Most strikingly, the colors and the rough contours of an image can be reconstructed from activations in higher network layers and even from the predicted class probabilities.",
"Image representations, from SIFT and Bag of Visual Words to Convolutional Neural Networks (CNNs), are a crucial component of almost any image understanding system. Nevertheless, our understanding of them remains limited. In this paper we conduct a direct analysis of the visual information contained in representations by asking the following question: given an encoding of an image, to which extent is it possible to reconstruct the image itself? To answer this question we contribute a general framework to invert representations. We show that this method can invert representations such as HOG more accurately than recent alternatives while being applicable to CNNs too. We then use this technique to study the inverse of recent state-of-the-art CNN image representations for the first time. Among our findings, we show that several layers in CNNs retain photographically accurate information about the image, with different degrees of geometric and photometric invariance.",
"We aim to model the top-down attention of a convolutional neural network (CNN) classifier for generating task-specific attention maps. Inspired by a top-down human visual attention model, we propose a new backpropagation scheme, called Excitation Backprop, to pass along top-down signals downwards in the network hierarchy via a probabilistic Winner-Take-All process. Furthermore, we introduce the concept of contrastive attention to make the top-down attention maps more discriminative. We show a theoretic connection between the proposed contrastive attention formulation and the Class Activation Map computation. Efficient implementation of Excitation Backprop for common neural network layers is also presented. In experiments, we visualize the evidence of a model’s classification decision by computing the proposed top-down attention maps. For quantitative evaluation, we report the accuracy of our method in weakly supervised localization tasks on the MS COCO, PASCAL VOC07 and ImageNet datasets. The usefulness of our method is further validated in the text-to-region association task. On the Flickr30k Entities dataset, we achieve promising performance in phrase localization by leveraging the top-down attention of a CNN model that has been trained on weakly labeled web images. Finally, we demonstrate applications of our method in model interpretation and data annotation assistance for facial expression analysis and medical imaging tasks.",
"This paper addresses the visualisation of image classification models, learnt using deep Convolutional Networks (ConvNets). We consider two visualisation techniques, based on computing the gradient of the class score with respect to the input image. The first one generates an image, which maximises the class score [, 2009], thus visualising the notion of the class, captured by a ConvNet. The second technique computes a class saliency map, specific to a given image and class. We show that such maps can be employed for weakly supervised object segmentation using classification ConvNets. Finally, we establish the connection between the gradient-based ConvNet visualisation methods and deconvolutional networks [, 2013]."
]
} |
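The activation-maximization visualizations cited above share one core recipe: gradient ascent on the input to maximize a chosen unit's score. A minimal sketch, assuming `model` maps an image tensor to a vector of unit activations; the probed layer, the input size, and the simple L2 regularizer are illustrative choices, not any specific paper's setup.

```python
import torch

def activation_maximization(model, unit, steps=200, lr=0.1, l2=1e-4):
    """Synthesize an input that maximizes the score of one unit,
    starting from noise; the L2 penalty keeps the image in range."""
    x = torch.randn(1, 3, 224, 224, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        score = model(x)[0, unit]
        loss = -score + l2 * (x ** 2).sum()  # ascend the unit's score
        opt.zero_grad(); loss.backward(); opt.step()
    return x.detach()
```

The cited works differ mainly in the regularizers and priors added to this loop to make the synthesized images recognizable.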
1901.02184 | 2902503307 | In this paper, we propose to disentangle and interpret contextual effects that are encoded in a pre-trained deep neural network. We use our method to explain the gaming strategy of the AlphaGo Zero model. Unlike previous studies that visualized image appearances corresponding to the network output or a neural activation only from a global perspective, our research aims to clarify how a certain input unit (dimension) collaborates with other units (dimensions) to constitute inference patterns of the neural network and thus contribute to the network output. The analysis of local contextual effects w.r.t. certain input units is of special value in real applications. Explaining the logic of the AlphaGo Zero model is a typical application. In experiments, our method successfully disentangled the rationale of each move during the Go game. | Some studies retrieved certain units from intermediate layers of CNNs that were related to certain semantics, although the relationship between a certain semantics and each neural unit was usually not convincing enough. People usually treat the retrieved units as parallels to conventional mid-level features @cite_5 of images. @cite_3 @cite_38 selected units from feature maps to describe "scenes". @cite_19 discovered objects from feature maps. | {
"cite_N": [
"@cite_19",
"@cite_5",
"@cite_3",
"@cite_38"
],
"mid": [
"1496650988",
"1590510366",
"2963996492",
"2295107390"
],
"abstract": [
"Part models of object categories are essential for challenging recognition tasks, where differences in categories are subtle and only reflected in appearances of small parts of the object. We present an approach that is able to learn part models in a completely unsupervised manner, without part annotations and even without given bounding boxes during learning. The key idea is to find constellations of neural activation patterns computed using convolutional neural networks. In our experiments, we outperform existing approaches for fine-grained recognition on the CUB200-2011, Oxford PETS, and Oxford Flowers dataset in case no part or bounding box annotations are available and achieve state-of-the-art performance for the Stanford Dog dataset. We also show the benefits of neural constellation models as a data augmentation technique for fine-tuning. Furthermore, our paper unites the areas of generic and fine-grained classification, since our approach is suitable for both scenarios.",
"The goal of this paper is to discover a set of discriminative patches which can serve as a fully unsupervised mid-level visual representation. The desired patches need to satisfy two requirements: 1) to be representative, they need to occur frequently enough in the visual world; 2) to be discriminative, they need to be different enough from the rest of the visual world. The patches could correspond to parts, objects, \"visual phrases\", etc. but are not restricted to be any one of them. We pose this as an unsupervised discriminative clustering problem on a huge dataset of image patches. We use an iterative procedure which alternates between clustering and training discriminative classifiers, while applying careful cross-validation at each step to prevent overfitting. The paper experimentally demonstrates the effectiveness of discriminative patches as an unsupervised mid-level visual representation, suggesting that it could be used in place of visual words for many tasks. Furthermore, discriminative patches can also be used in a supervised regime, such as scene classification, where they demonstrate state-of-the-art performance on the MIT Indoor-67 dataset.",
"Abstract: With the success of new computational architectures for visual processing, such as convolutional neural networks (CNN) and access to image databases with millions of labeled examples (e.g., ImageNet, Places), the state of the art in computer vision is advancing rapidly. One important factor for continued progress is to understand the representations that are learned by the inner layers of these deep architectures. Here we show that object detectors emerge from training CNNs to perform scene classification. As scenes are composed of objects, the CNN for scene classification automatically discovers meaningful objects detectors, representative of the learned scene categories. With object detectors emerging as a result of learning to recognize scenes, our work demonstrates that the same network can perform both scene recognition and object localization in a single forward-pass, without ever having been explicitly taught the notion of objects.",
"In this work, we revisit the global average pooling layer proposed in [13], and shed light on how it explicitly enables the convolutional neural network (CNN) to have remarkable localization ability despite being trained on imagelevel labels. While this technique was previously proposed as a means for regularizing training, we find that it actually builds a generic localizable deep representation that exposes the implicit attention of CNNs on an image. Despite the apparent simplicity of global average pooling, we are able to achieve 37.1 top-5 error for object localization on ILSVRC 2014 without training on any bounding box annotation. We demonstrate in a variety of experiments that our network is able to localize the discriminative image regions despite just being trained for solving classification task1."
]
} |
1901.02184 | 2902503307 | In this paper, we propose to disentangle and interpret contextual effects that are encoded in a pre-trained deep neural network. We use our method to explain the gaming strategy of the AlphaGo Zero model. Unlike previous studies that visualized image appearances corresponding to the network output or a neural activation only from a global perspective, our research aims to clarify how a certain input unit (dimension) collaborates with other units (dimensions) to constitute inference patterns of the neural network and thus contribute to the network output. The analysis of local contextual effects w.r.t. certain input units is of special value in real applications. Explaining the logic of the AlphaGo Zero model is a typical application. In experiments, our method successfully disentangled the rationale of each move during the Go game. | Model-diagnosis methods, such as LIME @cite_10 , SHAP @cite_14 , influence functions @cite_27 , gradient-based visualization methods @cite_25 @cite_18 , and @cite_20 , extracted image regions that were responsible for network outputs. @cite_22 @cite_36 distilled knowledge from a pre-trained neural network into explainable models to interpret the logic of the target network. Such distillation-based network explanation is related to the first method proposed in this paper. However, unlike previous studies that distill knowledge into explicit visual concepts, using distillation to disentangle local contextual effects has not been explored before. A minimal gradient-saliency sketch follows this record. | {
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_22",
"@cite_36",
"@cite_27",
"@cite_10",
"@cite_25",
"@cite_20"
],
"mid": [
"2962858109",
"2962862931",
"2806874342",
"2804690703",
"2597603852",
"2282821441",
"2606462007",
"2613094910"
],
"abstract": [
"We propose a technique for producing ‘visual explanations’ for decisions from a large class of Convolutional Neural Network (CNN)-based models, making them more transparent. Our approach – Gradient-weighted Class Activation Mapping (Grad-CAM), uses the gradients of any target concept (say logits for ‘dog’ or even a caption), flowing into the final convolutional layer to produce a coarse localization map highlighting the important regions in the image for predicting the concept. Unlike previous approaches, Grad- CAM is applicable to a wide variety of CNN model-families: (1) CNNs with fully-connected layers (e.g. VGG), (2) CNNs used for structured outputs (e.g. captioning), (3) CNNs used in tasks with multi-modal inputs (e.g. visual question answering) or reinforcement learning, without architectural changes or re-training. We combine Grad-CAM with existing fine-grained visualizations to create a high-resolution class-discriminative visualization, Guided Grad-CAM, and apply it to image classification, image captioning, and visual question answering (VQA) models, including ResNet-based architectures. In the context of image classification models, our visualizations (a) lend insights into failure modes of these models (showing that seemingly unreasonable predictions have reasonable explanations), (b) outperform previous methods on the ILSVRC-15 weakly-supervised localization task, (c) are more faithful to the underlying model, and (d) help achieve model generalization by identifying dataset bias. For image captioning and VQA, our visualizations show even non-attention based models can localize inputs. Finally, we design and conduct human studies to measure if Grad-CAM explanations help users establish appropriate trust in predictions from deep networks and show that Grad-CAM helps untrained users successfully discern a ‘stronger’ deep network from a ‘weaker’ one even when both make identical predictions. Our code is available at https: github.com ramprs grad-cam along with a demo on CloudCV [2] and video at youtu.be COjUB9Izk6E.",
"Understanding why a model makes a certain prediction can be as crucial as the prediction's accuracy in many applications. However, the highest accuracy for large modern datasets is often achieved by complex models that even experts struggle to interpret, such as ensemble or deep learning models, creating a tension between accuracy and interpretability. In response, various methods have recently been proposed to help users interpret the predictions of complex models, but it is often unclear how these methods are related and when one method is preferable over another. To address this problem, we present a unified framework for interpreting predictions, SHAP (SHapley Additive exPlanations). SHAP assigns each feature an importance value for a particular prediction. Its novel components include: (1) the identification of a new class of additive feature importance measures, and (2) theoretical results showing there is a unique solution in this class with a set of desirable properties. The new class unifies six existing methods, notable because several recent methods in the class lack the proposed desirable properties. Based on insights from this unification, we present new methods that show improved computational performance and or better consistency with human intuition than previous approaches.",
"Machine Learning algorithms are increasingly being used in recent years due to their flexibility in model fitting and increased predictive performance. However, the complexity of the models makes them hard for the data analyst to interpret the results and explain them without additional tools. This has led to much research in developing various approaches to understand the model behavior. In this paper, we present the Explainable Neural Network (xNN), a structured neural network designed especially to learn interpretable features. Unlike fully connected neural networks, the features engineered by the xNN can be extracted from the network in a relatively straightforward manner and the results displayed. With appropriate regularization, the xNN provides a parsimonious explanation of the relationship between the features and the output. We illustrate this interpretable feature--engineering property on simulated examples.",
"This paper presents an unsupervised method to learn a neural network, namely an explainer, to interpret a pre-trained convolutional neural network (CNN), i.e., explaining knowledge representations hidden in middle conv-layers of the CNN. Given feature maps of a certain conv-layer of the CNN, the explainer performs like an auto-encoder, which first disentangles the feature maps into object-part features and then inverts object-part features back to features of higher conv-layers of the CNN. More specifically, the explainer contains interpretable conv-layers, where each filter disentangles the representation of a specific object part from chaotic input feature maps. As a paraphrase of CNN features, the disentangled representations of object parts help people understand the logic inside the CNN. We also learn the explainer to use object-part features to reconstruct features of higher CNN layers, in order to minimize loss of information during the feature disentanglement. More crucially, we learn the explainer via network distillation without using any annotations of sample labels, object parts, or textures for supervision. We have applied our method to different types of CNNs for evaluation, and explainers have significantly boosted the interpretability of CNN features.",
"How can we explain the predictions of a black-box model? In this paper, we use influence functions — a classic technique from robust statistics — to trace a model's prediction through the learning algorithm and back to its training data, thereby identifying training points most responsible for a given prediction. To scale up influence functions to modern machine learning settings, we develop a simple, efficient implementation that requires only oracle access to gradients and Hessian-vector products. We show that even on non-convex and non-differentiable models where the theory breaks down, approximations to influence functions can still provide valuable information. On linear models and convolutional neural networks, we demonstrate that influence functions are useful for multiple purposes: understanding model behavior, debugging models, detecting dataset errors, and even creating visually-indistinguishable training-set attacks.",
"Despite widespread adoption, machine learning models remain mostly black boxes. Understanding the reasons behind predictions is, however, quite important in assessing trust, which is fundamental if one plans to take action based on a prediction, or when choosing whether to deploy a new model. Such understanding also provides insights into the model, which can be used to transform an untrustworthy model or prediction into a trustworthy one. In this work, we propose LIME, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner, by learning an interpretable model locally varound the prediction. We also propose a method to explain models by presenting representative individual predictions and their explanations in a non-redundant way, framing the task as a submodular optimization problem. We demonstrate the flexibility of these methods by explaining different models for text (e.g. random forests) and image classification (e.g. neural networks). We show the utility of explanations via novel experiments, both simulated and with human subjects, on various scenarios that require trust: deciding if one should trust a prediction, choosing between models, improving an untrustworthy classifier, and identifying why a classifier should not be trusted.",
"As machine learning algorithms are increasingly applied to high impact yet high risk tasks, such as medical diagnosis or autonomous driving, it is critical that researchers can explain how such algorithms arrived at their predictions. In recent years, a number of image saliency methods have been developed to summarize where highly complex neural networks \"look\" in an image for evidence for their predictions. However, these techniques are limited by their heuristic nature and architectural constraints. In this paper, we make two main contributions: First, we propose a general framework for learning different kinds of explanations for any black box algorithm. Second, we specialise the framework to find the part of an image most responsible for a classifier decision. Unlike previous works, our method is model-agnostic and testable because it is grounded in explicit and interpretable image perturbations.",
"In this work, we propose CLass-Enhanced Attentive Response (CLEAR): an approach to visualize and understand the decisions made by deep neural networks (DNNs) given a specific input. CLEAR facilitates the visualization of attentive regions and levels of interest of DNNs during the decision-making process. It also enables the visualization of the most dominant classes associated with these attentive regions of interest. As such, CLEAR can mitigate some of the shortcomings of heatmap-based methods associated with decision ambiguity, and allows for better insights into the decision-making process of DNNs. Quantitative and qualitative experiments across three different datasets demonstrate the efficacy of CLEAR for gaining a better understanding of the inner workings of DNNs during the decision-making process."
]
} |
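Of the diagnosis methods listed above, the gradient-based family is the simplest to sketch: attribute the output to input regions via the gradient magnitude. This is vanilla gradient saliency only — a generic stand-in, not LIME, SHAP, or the distillation approach; the (1, C, H, W) input shape and a classification-style `model` are assumptions.

```python
import torch

def gradient_saliency(model, x, target_class):
    """Vanilla gradient saliency: |d score / d input|, collapsed over
    channels, highlights pixels the prediction is locally sensitive to."""
    x = x.clone().requires_grad_(True)
    score = model(x)[0, target_class]
    score.backward()
    return x.grad.abs().max(dim=1)[0]  # (1, H, W) saliency map
```

Perturbation-based methods such as LIME and SHAP replace this gradient with fits over locally perturbed inputs, trading speed for model-agnosticism.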
1901.02184 | 2902503307 | In this paper, we propose to disentangle and interpret contextual effects that are encoded in a pre-trained deep neural network. We use our method to explain the gaming strategy of the AlphaGo Zero model. Unlike previous studies that visualized image appearances corresponding to the network output or a neural activation only from a global perspective, our research aims to clarify how a certain input unit (dimension) collaborates with other units (dimensions) to constitute inference patterns of the neural network and thus contribute to the network output. The analysis of local contextual effects w.r.t. certain input units is of special value in real applications. Explaining the logic of the AlphaGo Zero model is a typical application. In experiments, our method successfully disentangled the rationale of each move during the Go game. | A new trend is to learn networks with meaningful feature representations in intermediate layers @cite_30 @cite_4 @cite_21 in a weakly-supervised or unsupervised manner. For example, capsule nets @cite_32 and the interpretable R-CNN @cite_11 learned interpretable middle-layer features. InfoGAN @cite_2 and @math -VAE @cite_17 learned meaningful input codes for generative networks. @cite_12 developed a loss to push each middle-layer filter towards the representation of a specific object part during the learning process, without any part annotations being given. The standard @math -VAE objective is restated after this record. | {
"cite_N": [
"@cite_30",
"@cite_4",
"@cite_21",
"@cite_32",
"@cite_17",
"@cite_2",
"@cite_12",
"@cite_11"
],
"mid": [
"2311110368",
"2964222561",
"2560977758",
"2963703618",
"2753738274",
"2963226019",
"2764024122",
""
],
"abstract": [
"Combining deep neural networks with structured logic rules is desirable to harness flexibility and reduce uninterpretability of the neural models. We propose a general framework capable of enhancing various types of neural networks (e.g., CNNs and RNNs) with declarative first-order logic rules. Specifically, we develop an iterative distillation method that transfers the structured information of logic rules into the weights of neural networks. We deploy the framework on a CNN for sentiment analysis, and an RNN for named entity recognition. With a few highly intuitive rules, we obtain substantial improvements and achieve state-of-the-art or comparable results to previous best-performing systems.",
"Convolutional neural networks (CNNs) have shown great success in computer vision, approaching human-level performance when trained for specific tasks via application-specific loss functions. In this paper, we propose a method for augmenting and training CNNs so that their learned features are compositional. It encourages networks to form representations that disentangle objects from their surroundings and from each other, thereby promoting better generalization. Our method is agnostic to the specific details of the underlying CNN to which it is applied and can in principle be used with any CNN. As we show in our experiments, the learned representations lead to feature activations that are more localized and improve performance over non-compositional baselines in object recognition tasks.",
"In this paper we aim at facilitating generalization for deep networks while supporting interpretability of the learned representations. Towards this goal, we propose a clustering based regularization that encourages parsimonious representations. Our k-means style objective is easy to optimize and flexible supporting various forms of clustering, including sample and spatial clustering as well as co-clustering. We demonstrate the effectiveness of our approach on the tasks of unsupervised learning, classification, fine grained categorization and zero-shot learning.",
"A capsule is a group of neurons whose activity vector represents the instantiation parameters of a specific type of entity such as an object or object part. We use the length of the activity vector to represent the probability that the entity exists and its orientation to represent the instantiation parameters. Active capsules at one level make predictions, via transformation matrices, for the instantiation parameters of higher-level capsules. When multiple predictions agree, a higher level capsule becomes active. We show that a discrimininatively trained, multi-layer capsule system achieves state-of-the-art performance on MNIST and is considerably better than a convolutional net at recognizing highly overlapping digits. To achieve these results we use an iterative routing-by-agreement mechanism: A lower-level capsule prefers to send its output to higher level capsules whose activity vectors have a big scalar product with the prediction coming from the lower-level capsule.",
"Learning an interpretable factorised representation of the independent data generative factors of the world without supervision is an important precursor for the development of artificial intelligence that is able to learn and reason in the same way that humans do. We introduce beta-VAE, a new state-of-the-art framework for automated discovery of interpretable factorised latent representations from raw image data in a completely unsupervised manner. Our approach is a modification of the variational autoencoder (VAE) framework. We introduce an adjustable hyperparameter beta that balances latent channel capacity and independence constraints with reconstruction accuracy. We demonstrate that beta-VAE with appropriately tuned beta > 1 qualitatively outperforms VAE (beta = 1), as well as state of the art unsupervised (InfoGAN) and semi-supervised (DC-IGN) approaches to disentangled factor learning on a variety of datasets (celebA, faces and chairs). Furthermore, we devise a protocol to quantitatively compare the degree of disentanglement learnt by different models, and show that our approach also significantly outperforms all baselines quantitatively. Unlike InfoGAN, beta-VAE is stable to train, makes few assumptions about the data and relies on tuning a single hyperparameter, which can be directly optimised through a hyper parameter search using weakly labelled data or through heuristic visual inspection for purely unsupervised data.",
"This paper describes InfoGAN, an information-theoretic extension to the Generative Adversarial Network that is able to learn disentangled representations in a completely unsupervised manner. InfoGAN is a generative adversarial network that also maximizes the mutual information between a small subset of the latent variables and the observation. We derive a lower bound of the mutual information objective that can be optimized efficiently. Specifically, InfoGAN successfully disentangles writing styles from digit shapes on the MNIST dataset, pose from lighting of 3D rendered images, and background digits from the central digit on the SVHN dataset. It also discovers visual concepts that include hair styles, presence absence of eyeglasses, and emotions on the CelebA face dataset. Experiments show that InfoGAN learns interpretable representations that are competitive with representations learned by existing supervised methods. For an up-to-date version of this paper, please see https: arxiv.org abs 1606.03657.",
"This paper proposes a method to modify a traditional convolutional neural network (CNN) into an interpretable CNN, in order to clarify knowledge representations in high conv-layers of the CNN. In an interpretable CNN, each filter in a high conv-layer represents a specific object part. Our interpretable CNNs use the same training data as ordinary CNNs without a need for any annotations of object parts or textures for supervision. The interpretable CNN automatically assigns each filter in a high conv-layer with an object part during the learning process. We can apply our method to different types of CNNs with various structures. The explicit knowledge representation in an interpretable CNN can help people understand the logic inside a CNN, i.e. what patterns are memorized by the CNN for prediction. Experiments have shown that filters in an interpretable CNN are more semantically meaningful than those in a traditional CNN. The code is available at https: github.com zqs1022 interpretableCNN.",
""
]
} |
1901.01774 | 2907001860 | Accurate house price prediction is of great significance to various real estate stakeholders such as house owners, buyers, investors, and agents. We propose a location-centered prediction framework that differs from existing work in terms of data profiling and prediction model. Regarding data profiling, we define and capture a fine-grained location profile powered by a diverse range of location data sources, such as transportation profile (e.g., distance to nearest train station), education profile (e.g., school zones and ranking), suburb profile based on census data, facility profile (e.g., nearby hospitals, supermarkets). Regarding the choice of prediction model, we observe that a variety of approaches either consider the entire house data for modeling, or split the entire data and model each partition independently. However, such modeling ignores the relatedness between partitions, and for all prediction scenarios, there may not be sufficient training samples per partition for the latter approach. We address this problem by conducting a careful study of exploiting the Multi-Task Learning (MTL) model. Specifically, we map the strategies for splitting the entire house data to the ways the tasks are defined in MTL, and each partition obtained is aligned with a task. Furthermore, we select specific MTL-based methods with different regularization terms to capture and exploit the relatedness between tasks. Based on real-world house transaction data collected in Melbourne, Australia, we design extensive experimental evaluations, and the results indicate a significant superiority of MTL-based methods over state-of-the-art approaches. Meanwhile, we conduct an in-depth analysis of the impact of task definitions and method selections in MTL on the prediction performance, and demonstrate that the impact of task definitions on prediction performance far exceeds that of method selections. | A house is usually treated as a heterogeneous good, defined by a bundle of utility-bearing features @cite_33 @cite_21 . Therefore, the house price can be considered a quantitative representation of a set of these features. Over the past decades, a large number of studies have examined the relationship between house prices and house features. For example, Król @cite_32 investigated the relationship between the price of an apartment and its significant features based on the results of a hedonic analysis in Poland. The work of @cite_51 discussed which house features have negative or positive effects upon the value of a house in Turkey. Kryvobokov and Wilhelmsson @cite_40 derived the weights of the relative importance of location features that influence the market values of apartments in Donetsk, Ukraine. @cite_14 compared measures of location, using both distance and travel time to the CBD and to multiple employment centers, to understand how residence location relative to employment location affects house prices in Indianapolis, Indiana, USA. Ozalp and Akinci @cite_19 determined the housing and environmental features that influence residential real estate sale prices in Artvin, Turkey. | {
"cite_N": [
"@cite_14",
"@cite_33",
"@cite_21",
"@cite_32",
"@cite_19",
"@cite_40",
"@cite_51"
],
"mid": [
"216611364",
"2018120288",
"",
"2398942906",
"2775096275",
"2137026670",
"2172367695"
],
"abstract": [
"A measure of location relative to employment is often included in hedonic housing price models. This is most often distance to the center, based on the monocentric model, which does not consider the decentralization of employment in urban areas. This paper tests the perfor-mance of alternative measures of location, considering both distance and time to the center and to multiple employment centers and measures of accessibility to employment and change in accessibility. The measures using multiple employment centers and accessibility perform better than simple distance to the center, with the combination of accessibility to employment and change in accessibility doing best.",
"The hedonic-based regression approach has been utilised extensively to investigate th relationship between house prices and housing characteristics. However, this approach is subject t criticisms arising from potential problems relating to fundamental model assumptions an estimation such as the identification of supply and demand, market disequilibrium, the selectio of independent variables, the choice of functional form of hedonic equation and marke segmentation. This study introduces and utilises an alternative approach-the decision tre approach, which is an important statistical pattern recognition tool. Using the Singapore resal public housing market as a case study, the article demonstrates the usefulness of this techniqu in examining the relationship between house prices and housing characteristics, identifying th significant determinants of housing prices and predicting housing prices. The built tree show that homebuyers are more concerned about the basic housing characteristics of two- and three room flats or four-room flats such as floor area, model type and flat age. However, homebuyer of five-room flats pay more attention to floor level in addition to the basic housin characteristics. In addition, homebuyers of executive apartments are less concerned about basi quantitative characteristics and have higher housing consumption expectations and pay mor attention to 'quality' and service characteristics such as recreational facilities and the livin environment.",
"",
"This paper concentrates on empirical and methodological issues of the application of econometric methods to modelling real estate market. The presented hedonic analysis of apartments’ prices in Wroclaw is based on the dataset consisting of over ten thousands offers from the secondary real estate market. The models estimated as the result of the research allow for pricing the apartments, as well as its characteristics.The foundations of hedonic methods are formed by the so-called hedonic hypothesis which states that heterogeneous commodities are characterized by a set of attributes relevant both from the point of view of the customer and the producer. As a consequence, the price of a commodity is determined as an aggregate of values estimated for each significant characteristic of this commodity. The hedonic model allows to price the commodity as well as to identify and estimate the prices of respective attributes, including the prices which are not directly observable on the market. The latter is particularly useful for the real estate market as it enables pricing location-related, neighbourhood-related and structure-related characteristics of housing whose values cannot be obtained otherwise.",
"Owning residential real estate is very important not only for satisfying the need for shelter—a fundamental human right—but also it is considered as a good investment. It is true that real estates are heterogeneous because of their various structural, positional characteristics, as well as their local conditions, all of which affect the value of real estates. These are among the reasons for undertaking studies to determine the degree of importance and the parameters affecting the value of residential real estate. By using one of the widely used methods, hedonic pricing method (HPM), this study seeks to determine the structural and environmental characteristics that are effective on residential real estate sale prices in Artvin city center. With the highest number of residential real estate sales in 2015, the Orta neighborhood, in the Central District of Artvin, was chosen as the study area. The relationship between 18 parameters were analyzed, on the basis of the actual sales value, acquired from the purchasers of 81 residential properties in this neighborhood and taking into account the structure of Artvin city. In this context, using the functional form of HPM, semi-logarithmic model was generated. When examining the model, it was determined that four out of the supposedly effective 18 parameters were statistically significant and they were able to explain 84 of the variations on price. Moreover, the parameters of floor area, age, distance to primary school, and distance to city center were effective on the sales prices in the semi-logarithmic model. In addition, the results revealed that the structural characteristics are more influential on real estate prices than the environmental and accessibility features within Orta neighborhood in Artvin.",
"A hedonic model is specified for asking prices for apartments in Donetsk (Ukraine). This model is used to determine statistically significant location attributes. These attributes can be used for land assessment in a city where data on the land market are lacking. Distance gradients for CBD accessibility are investigated in different geographical directions. Separate models are created for sub‐samples located inside and outside the city centre. A spatial weight matrix is used to detect spatial autocorrelation. The regression results are compared with the valuation of experts.",
"In this study, there has been aimed to determine the factors that affect the price of flats in the housing sector in Turkey with a hedonic pricing model. According to the model results, the house’s having residential swimming pool, a jacuzzi and a water tank, its being a duplex, its central heating system, its being closer to the center, the size of the house, the bathroom floor’s being vinyl or PVC, being closer to banking services and compulsory education services, its having cable TV, telephone lines and parking opportunities increases the value of the house. The house is in the basement or ground floor of the building construction date is first, the room is the floor of ceramic tiles, bathroom floor screed (concrete road) the fuel used is coal, reduces the value of the house."
]
} |
1901.01774 | 2907001860 | Accurate house price prediction is of great significance to various real estate stakeholders such as house owners, buyers, investors, and agents. We propose a location-centered prediction framework that differs from existing work in terms of data profiling and prediction model. Regarding data profiling, we define and capture a fine-grained location profile powered by a diverse range of location data sources, such as transportation profile (e.g., distance to nearest train station), education profile (e.g., school zones and ranking), suburb profile based on census data, facility profile (e.g., nearby hospitals, supermarkets). Regarding the choice of prediction model, we observe that a variety of approaches either consider the entire house data for modeling, or split the entire data and model each partition independently. However, such modeling ignores the relatedness between partitions, and for all prediction scenarios, there may not be sufficient training samples per partition for the latter approach. We address this problem by conducting a careful study of exploiting the Multi-Task Learning (MTL) model. Specifically, we map the strategies for splitting the entire house data to the ways the tasks are defined in MTL, and each partition obtained is aligned with a task. Furthermore, we select specific MTL-based methods with different regularization terms to capture and exploit the relatedness between tasks. Based on real-world house transaction data collected in Melbourne, Australia, we design extensive experimental evaluations, and the results indicate a significant superiority of MTL-based methods over state-of-the-art approaches. Meanwhile, we conduct an in-depth analysis of the impact of task definitions and method selections in MTL on the prediction performance, and demonstrate that the impact of task definitions on prediction performance far exceeds that of method selections. | Recent studies have focused on house price prediction from local views, and such approaches have gradually become a serious alternative to, and extension of, conventional house price modeling. Among these studies, @cite_17 compared alternative methods for taking spatial dependence into account in house price prediction, and concluded that a geostatistical model with disaggregated submarket variables performed the best. @cite_16 investigated the hedonic model and three spatial models, and out-of-sample prediction accuracy was used for comparison purposes. Their prediction results indicated the importance of incorporating the nearest neighbor transactions for predicting housing values. Gerek @cite_34 designed two different adaptive neuro-fuzzy approaches for prediction, namely ANFIS with grid partition (ANFIS-GP) and ANFIS with sub-clustering (ANFIS-SC), and the results indicated that the performance of ANFIS-GP was slightly better than that of ANFIS-SC. @cite_11 considered parametric and semi-parametric spatial hedonic model variants to reflect the spatial effects in house prices. The proposed model is represented as a mixed model that accounts for spatial autocorrelation, spatial heterogeneity and (smooth and nonparametrically specified) nonlinearities using penalized splines methodology. The results obtained suggest that the nonlinear models are the best strategies for house price prediction. | {
"cite_N": [
"@cite_11",
"@cite_16",
"@cite_34",
"@cite_17"
],
"mid": [
"2743729757",
"2030068896",
"2025117385",
"1589097074"
],
"abstract": [
"House price prediction is a hot topic in the economic literature. House price prediction has traditionally been approached using a-spatial linear (or intrinsically linear) hedonic models. It has been shown, however, that spatial effects are inherent in house pricing. This article considers parametric and semi-parametric spatial hedonic model variants that account for spatial autocorrelation, spatial heterogeneity and (smooth and nonparametrically specified) nonlinearities using penalized splines methodology. The models are represented as a mixed model that allow for the estimation of the smoothing parameters along with the other parameters of the model. To assess the out-of-sample performance of the models, the paper uses a database containing the price and characteristics of 10,512 homes in Madrid, Spain (Q1 2010). The results obtained suggest that the nonlinear models accounting for spatial heterogeneity and flexible nonlinear relationships between some of the individual or areal characteristics of the houses and their prices are the best strategies for house price prediction.",
"This research reports results from a competition on modeling spatial and temporal components of house prices. A large, well-documented database was prepared and made available to anyone wishing to join the competition. To prevent data snooping, out-of-sample observations were withheld; they were deposited with one individual who did not enter the competition, but had the responsibility of calculating out-of-sample statistics for results submitted by the others. The competition turned into a cooperative effort, resulting in enhancements to previous methods including: a localized version of Dubin's kriging model, a kriging version of Clapp's local regression model, and a local application of Case's earlier work on dividing a geographic housing market into districts. The results indicate the importance of nearest neighbor transactions for out-of-sample predictions: spatial trend analysis and census tract variables do not perform nearly as well as neighboring residuals.",
"Abstract It is of high importance to estimate the house selling prices because sellers and buyers need this information in an accurate way at time of sale. As a result of the fact that the estimation of the house prices is extorsive considerable research has been published in the literature on estimating house prices. This study covers the application of two different adaptive neuro-fuzzy (ANFIS) approaches for the estimation of house selling price. In this study, two different ANFIS models, namely ANFIS with grid partition (ANFIS-GP) and ANFIS with sub clustering (ANFIS-SC), were used in forecasting house prices, and the results were evaluated. Comparison of results from the two techniques indicated that the ANFIS-GP models performed better than the ANFIS-SC models. The study suggests that the ANFIS-GP technique can be successfully used in the estimation of house prices in the construction sector.",
"This paper compares alternative methods for taking spatial dependence into account in house price prediction. We select hedonic methods that have been reported in the literature to perform relatively well in terms of ex-sample prediction accuracy. Because differences in performance may be due to differences in data, we compare the methods using a single data set. The estimation methods include simple OLS, a two-stage process incorporating nearest neighbors’ residuals in the second stage, geostatistical, and trend surface models. These models take into account submarkets by adding dummy variables or by estimating separate equations for each submarket. Based on data for approximately 13,000 transactions from Louisville, Kentucky, we conclude that a geostatistical model with disaggregated submarket variables performs best."
]
} |
1901.01994 | 2909857857 | Central Pattern Generators (CPGs) are biological neural circuits capable of producing coordinated rhythmic outputs in the absence of rhythmic input. As a result, they are responsible for most rhythmic motion in living organisms. This rhythmic control is broadly applicable to fields such as locomotive robotics and medical devices. In this paper, we explore the possibility of creating a self-sustaining CPG network for reinforcement learning that learns rhythmic motion more efficiently and across more general environments than the current multilayer perceptron (MLP) baseline models. Recent work introduces the Structured Control Net (SCN), which maintains linear and nonlinear modules for local and global control, respectively. Here, we show that time-sequence architectures such as Recurrent Neural Networks (RNNs) model CPGs effectively. Combining previous work with RNNs and SCNs, we introduce the Recurrent Control Net (RCN), which adds a linear component to the RNN-based nonlinear module. RCNs match and exceed the performance of baseline MLPs and SCNs across all environment tasks. Our findings confirm existing intuitions for RNNs on reinforcement learning tasks, and demonstrate the promise of SCN-like structures in reinforcement learning. | @cite_19 provides evidence that locomotion-specific structures, rather than general-purpose MLPs, can provide motion intuitions in RL environments. Briefly, the SCN architecture separately learns local and global control. These are specific to locomotion tasks: the agent needs to learn global interactions as well as general patterns of movement specific to a task. @cite_19 demonstrates improvements using only an MLP nonlinear module. These SCN results follow network intuitions: since locomotive actions are heavily dependent on immediate prior actions, learning local interactions gives the local context necessary to generate subsequent actions. Still, the SCN learns cyclic actions very weakly, as it produces outputs from current observations only. The Recurrent Control Net instead provides deeper context from previous states for these locomotive RL policies, allowing the possibility of cyclic policy functions. | {
"cite_N": [
"@cite_19"
],
"mid": [
"2787966607"
],
"abstract": [
"In recent years, Deep Reinforcement Learning has made impressive advances in solving several important benchmark problems for sequential decision making. Many control applications use a generic multilayer perceptron (MLP) for non-vision parts of the policy network. In this work, we propose a new neural network architecture for the policy network representation that is simple yet effective. The proposed Structured Control Net (SCN) splits the generic MLP into two separate sub-modules: a nonlinear control module and a linear control module. Intuitively, the nonlinear control is for forward-looking and global control, while the linear control stabilizes the local dynamics around the residual of global control. We hypothesize that this will bring together the benefits of both linear and nonlinear policies: improve training sample efficiency, final episodic reward, and generalization of learned policy, while requiring a smaller network and being generally applicable to different training methods. We validated our hypothesis with competitive results on simulations from OpenAI MuJoCo, Roboschool, Atari, and a custom 2D urban driving environment, with various ablation and generalization tests, trained with multiple black-box and policy gradient training methods. The proposed architecture has the potential to improve upon broader control tasks by incorporating problem specific priors into the architecture. As a case study, we demonstrate much improved performance for locomotion tasks by emulating the biological central pattern generators (CPGs) as the nonlinear part of the architecture."
]
} |
1901.01994 | 2909857857 | Central Pattern Generators (CPGs) are biological neural circuits capable of producing coordinated rhythmic outputs in the absence of rhythmic input. As a result, they are responsible for most rhythmic motion in living organisms. This rhythmic control is broadly applicable to fields such as locomotive robotics and medical devices. In this paper, we explore the possibility of creating a self-sustaining CPG network for reinforcement learning that learns rhythmic motion more efficiently and across more general environments than the current multilayer perceptron (MLP) baseline models. Recent work introduces the Structured Control Net (SCN), which maintains linear and nonlinear modules for local and global control, respectively. Here, we show that time-sequence architectures such as Recurrent Neural Networks (RNNs) model CPGs effectively. Combining previous work with RNNs and SCNs, we introduce the Recurrent Control Net (RCN), which adds a linear component to the RNN-based nonlinear module. RCNs match and exceed the performance of baseline MLPs and SCNs across all environment tasks. Our findings confirm existing intuitions for RNNs on reinforcement learning tasks, and demonstrate the promise of SCN-like structures in reinforcement learning. | RNNs model time-sequences well by maintaining a hidden state as a function of priors @cite_14 . Previously, RNNs have been used intensively in natural language processing, because sequence prediction is best modeled by including previous context. Therefore, to leverage the added context, RNNs have also been explored loosely in reinforcement learning for quadruped environments. However, RNNs have not been generalized to general locomotive tasks and still remain relatively specific to these quadruped-based locomotion tasks @cite_18 @cite_13 . Instead of using RNNs as full policy networks, including linear-global dynamics by adopting a simple RNN as the nonlinearity in SCNs has the potential to exceed existing SCN MLP baselines (see the Appendix for a comprehensive description of effective RNN architectures). | {
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_13"
],
"mid": [
"2154269964",
"1598796236",
""
],
"abstract": [
"Aiming to real easy application in several quadruped robot platforms, this paper introduces a new method of modeling central pattern generators (CPG) to control quadruped locomotion. Not only can this new model generate all the primary gaits of quadrupeds stably with limit cycle effect, but it also has the ability of tuning the periodic outputs with arbitrary waveforms. The core idea is to combine strong points of two mathematical tools: Fourier series and Recurrent neural networks. In addition, a new biomimetic controller is also introduced using the proposed CPG model and several reflex modules. Finally, dynamic simulations are performed to validate the efficiency of the proposed controller.",
"Countless learning tasks require dealing with sequential data. Image captioning, speech synthesis, and music generation all require that a model produce outputs that are sequences. In other domains, such as time series prediction, video analysis, and musical information retrieval, a model must learn from inputs that are sequences. Interactive tasks, such as translating natural language, engaging in dialogue, and controlling a robot, often demand both capabilities. Recurrent neural networks (RNNs) are connectionist models that capture the dynamics of sequences via cycles in the network of nodes. Unlike standard feedforward neural networks, recurrent networks retain a state that can represent information from an arbitrarily long context window. Although recurrent neural networks have traditionally been dicult to train, and often contain millions of parameters, recent advances in network architectures, optimization techniques, and parallel computation have enabled successful large-scale learning with them. In recent years, systems based on long short-term memory (LSTM) and bidirectional (BRNN) architectures have demonstrated ground-breaking performance on tasks as varied as image captioning, language translation, and handwriting recognition. In this survey, we review and synthesize the research that over the past three decades rst yielded and then made practical these powerful learning models. When appropriate, we reconcile conicting notation and nomenclature. Our goal is to provide a selfcontained explication of the state of the art together with a historical perspective and references to primary research.",
""
]
} |
1901.01978 | 2909236140 | While the classic Vickrey-Clarke-Groves mechanism ensures incentive compatibility for a static one-shot game, it does not appear to be feasible to construct a dominant truth-telling mechanism for agents that are stochastic dynamic systems. The contribution of this paper is to show that for a set of LQG agents a mechanism consisting of a sequence of layered payments over time decouples the intertemporal effect of current bids on future payoffs and ensures truth-telling of dynamic states by their agents, if system parameters are known and agents are rational. Additionally, it is shown that there is a "Scaled" VCG mechanism that simultaneously satisfies incentive compatibility, social efficiency, budget balance as well as individual rationality under a certain "market power balance" condition where no agent is too negligible or too dominant. A further desirable property is that the SVCG payments converge to the Lagrange payments, the payments corresponding to the true price in the absence of strategic considerations, as the number of agents in the market increases. For LQ but non-Gaussian agents the optimal social welfare over the class of linear control laws is achieved. | There are also some related works aiming at achieving budget balance for the VCG mechanism. @cite_18 discusses the trade-off between budget balance and efficiency of the mechanism. Cavallo @cite_16 uses domain information regarding agent valuation spaces to achieve redistribution of much of the required transfer payments back among the agents. Similarly, @cite_6 propose a mechanism that is efficient and comes close to budget balance by returning much of the payments back to the agents in the form of rebates. In @cite_20 , an enhanced AGV (Arrow-d’Aspremont-Gerard-Varet) mechanism is proposed to tackle the problem of budget balance in demand side management. | {
"cite_N": [
"@cite_18",
"@cite_16",
"@cite_20",
"@cite_6"
],
"mid": [
"1993934254",
"2056062533",
"1996299923",
"2541696016"
],
"abstract": [
"A service is produced for a set of agents. The service is binary, each agent either receives service or not, and the total cost of service is a submodular function of the set receiving service. We investigate strategyproof mechanisms that elicit individual willingness to pay, decide who is served, and then share the cost among them. If such a mechanism is budget balanced (covers cost exactly), it cannot be efficient (serve the surplus maximizing set of users) and vice-versa. We characterize the rich family of budget balanced and group strategyproof mechanisms and find that the mechanism associated with the Shapley value cost sharing formula is characterized by the property that its worst welfare loss is minimal. When we require efficiency rather than budget balance – the more common route in the literature – we find that there is a single Clarke-Groves mechanism that satisfies certain reasonable conditions: we call this the marginal cost pricing mechanism. We compare the size of the marginal cost pricing mechanism's worst budget surplus with the worst welfare loss of the Shapley value mechanism.",
"Mechanisms for coordinating group decision-making among self-interested agents often employ a trusted center, capable of enforcing the prescribed outcome. Typically such mechanisms, including the ubiquitous Vickrey Clarke Groves (VCG), require significant transfer payments from agents to the center. While this is sought after in some settings, it is often an unwanted cost of implementation. We propose a modification of the VCG framework that---by using domain information regarding agent valuation spaces---is often able to achieve redistribution of much of the required transfer payments back among the agents, thus coming closer to budget-balance. The proposed mechanism is strategyproof, ex post individual rational, no-deficit, and leads to an efficient outcome; we prove that among all mechanisms with these qualities and an anonymity property it is optimally balanced, in that no mechanism ever yields greater payoff to the agents. We provide a general characterization of when strategyproof redistribution is possible, and demonstrate specifically that substantial redistribution can be achieved in allocation problems.",
"Smart pricing methods using auction mechanism allow more information exchange between users and providers, and they can meet users' energy demand at a low cost of grid operation, which contributes to the economic and environmental benefit in smart grid. However, when asked to report their energy demand, users may have an incentive to cheat in order to consume more while paying less, causing extra costs for grid operation. So it is important to ensure truthfulness among users for demand side management. In this paper, we propose an efficient pricing method that can prevent users' cheating. In the proposed model, the smart meter can record user's consumption information and communicate with the energy provider's terminal. Users' preferences and consumption patterns are modeled in form of a utility function. Based on this, we propose an enhanced AGV (Arrow-d'Aspremont-Gerard-Varet) mechanism to ensure truthfulness. In this incentive method, user's payment is related to its consumption credit. One will be punished to pay extra if there is a cheat record in its consumption history. We prove that the enhanced AGV mechanism can achieve the basic qualifications: incentive compatibility, individual rationality and budget balance. Simulation results confirm that the enhanced AGV mechanism can ensure truth-telling, and benefit both users and energy providers.",
"This paper is about allocation of an infinitely divisible good to several rational and strategic agents. The allocation is done by a social planner who has limited information because the agents’ valuation functions are taken to be private information known only to the respective agents. We allow only a scalar signal, called a bid, from each agent to the social planner. Yang and Hajek [Yang, S., Hajek, B., 2007. “VCG-Kelly mechanisms for allocation of divisible goods: Adapting VCG mechanisms to one-dimensional signals”, IEEE Journal on Selected Areas in Communications 25 (6), 1237–1243.] and Johari and Tsitsiklis [Johari, R., Tsitsiklis, J. N., 2009. “Efficiency of scalar-parameterized mechanisms”, Operations Research 57 (4), 823–839.] proposed a scalar strategy Vickrey–Clarke–Groves (SSVCG) mechanism with efficient Nash equilibria. We consider a setting where the social planner desires minimal budget surplus. Example situations include fair sharing of Internet resources and auctioning of certain public goods where revenue maximization is not a consideration. Under the SSVCG framework, we propose a mechanism that is efficient and comes close to budget balance by returning much of the payments back to the agents in the form of rebates. We identify a design criterion for almost budget balance, impose feasibility and voluntary participation constraints, simplify the constraints, and arrive at a convex optimization problem to identify the parameters of the rebate functions. The convex optimization problem has a linear objective function and a continuum of linear constraints. We propose a solution method that involves a finite number of constraints, and identify the number of samples sufficient for a good approximation."
]
} |
1901.01978 | 2909236140 | While the classic Vickrey-Clarke-Groves mechanism ensures incentive compatibility for a static one-shot game, it does not appear to be feasible to construct a dominant truth-telling mechanism for agents that are stochastic dynamic systems. The contribution of this paper is to show that for a set of LQG agents a mechanism consisting of a sequence of layered payments over time decouples the intertemporal effect of current bids on future payoffs and ensures truth-telling of dynamic states by their agents, if system parameters are known and agents are rational. Additionally, it is shown that there is a "Scaled" VCG mechanism that simultaneously satisfies incentive compatibility, social efficiency, budget balance as well as individual rationality under a certain "market power balance" condition where no agent is too negligible or too dominant. A further desirable property is that the SVCG payments converge to the Lagrange payments, the payments corresponding to the true price in the absence of strategic considerations, as the number of agents in the market increases. For LQ but non-Gaussian agents the optimal social welfare over the class of linear control laws is achieved. | The problem of how to conduct bidding to achieve social welfare optimality in stochastic dynamic systems is examined in @cite_0 . It assumes that all agents are price-takers. | {
"cite_N": [
"@cite_0"
],
"mid": [
"2963169929"
],
"abstract": [
"A smart grid connects several electricity consumers producers, e.g., wind solar storage farms, fossil-fuel plants, industrial commercial loads, or load-serving aggregators, all modeled as stochastic dynamical systems. In each time period, each consumes supplies some electrical energy. Each such agent's utility is the benefit accrued from its consumption or the negative of its generation cost. The social welfare, the sum of all these utilities, is the total benefit accrued from all consumption minus the total cost of generation. The independent system operator is charged with maximizing the social welfare subject to total generation equalling consumption in each time period, but without the agents revealing their system states, dynamic models, utility functions, or uncertainties. It has to announce prices after interacting with agents via bid-price interactions. This paper examines the case where the agents respond in a compliant price-taking manner. It is shown that there is an iterative bid-price interaction, where agents respond to price announcements by complying to the requirement to announce their optimal responses according to their true stochastic model, or, in the case where the agents are linear quadratic Gaussian (LQG) systems, according to deterministic versions of their true stochastic models, that leads to the same global maximum value of social welfare attainable if all agents had pooled their information. In the important LQG case, the bid-price iteration is dramatically simple, exchanging only real-valued vectors of future prices and consumptions generations at each time step. Agents need not even know of the existence of other agents. DC power flow equations can also be incorporated. The results may be of broader interest vis-a-vis general equilibrium theory of economics for stochastic dynamic agents."
]
} |
1901.01978 | 2909236140 | While the classic Vickrey-Clarke-Groves mechanism ensures incentive compatibility for a static one-shot game, it does not appear to be feasible to construct a dominant truth-telling mechanism for agents that are stochastic dynamic systems. The contribution of this paper is to show that for a set of LQG agents a mechanism consisting of a sequence of layered payments over time decouples the intertemporal effect of current bids on future payoffs and ensures truth-telling of dynamic states by their agents, if system parameters are known and agents are rational. Additionally, it is shown that there is a "Scaled" VCG mechanism that simultaneously satisfies incentive compatibility, social efficiency, budget balance as well as individual rationality under a certain "market power balance" condition where no agent is too negligible or too dominant. A further desirable property is that the SVCG payments converge to the Lagrange payments, the payments corresponding to the true price in the absence of strategic considerations, as the number of agents in the market increases. For LQ but non-Gaussian agents the optimal social welfare over the class of linear control laws is achieved. | A preliminary announcement of some of these results was presented in the conference paper @cite_19 . The layered payment structure for LQG systems is mentioned there, and incentive compatibility results are presented without proofs. The present paper contains the complete proof of incentive compatibility, and further introduces the Scaled VCG mechanism for budget balance, individual rationality, and Lagrange optimality. | {
"cite_N": [
"@cite_19"
],
"mid": [
"2963829287"
],
"abstract": [
"The classic Vickrey-Clarke-Groves (VCG) mechanism ensures incentive compatibility, i.e., that truth-telling of all agents is a dominant strategy, for a static one-shot game. However, in a dynamic environment that unfolds over time, the agents' intertemporal payoffs depend on the expected future controls and payments, and a direct extension of the VCG mechanism is not sufficient to guarantee incentive compatibility. In fact, it does not appear to be feasible to construct mechanisms that ensure the dominance of dynamic truth-telling for agents comprised of general stochastic dynamic systems. The contribution of this paper is to show that such a dynamic stochastic extension does exist for the special case of Linear-Quadratic-Gaussian (LQG) agents with a careful construction of a sequence of layered payments over time. We propose a layered version of a modified VCG mechanism for payments that decouples the intertemporal effect of current bids on future payoffs, and prove that truth-telling of dynamic states forms a dominant strategy if system parameters are known and agents are rational. An important example of a problem needing such optimal dynamic coordination of stochastic agents arises in power systems where an Independent System Operator (ISO) has to ensure balance of generation and consumption at all time instants, while ensuring social optimality (maximization of the sum of the utilities of all agents). Addressing strategic behavior is critical as the price-taking assumption on market participants may not hold in an electricity market. Agents, can lie or otherwise game the bidding system. The challenge is to determine a bidding scheme between all agents and the ISO that maximizes social welfare, while taking into account the stochastic dynamic models of agents, since renewable energy resources such as solar wind are stochastic and dynamic in nature, as are consumptions by loads which are influenced by factors such as local temperatures and thermal inertias of facilities."
]
} |
1901.01805 | 2908176280 | In this paper we address the problem of multi-cue affect recognition in challenging environments such as child-robot interaction. Towards this goal we propose a method for automatic recognition of affect that leverages body expressions alongside facial expressions, as opposed to traditional methods that usually focus only on the latter. We evaluate our methods on a challenging child-robot interaction database of emotional expressions, as well as on a database of emotional expressions by actors, and show that the proposed method achieves significantly better results when compared with the facial expression baselines, can be trained both jointly and separately, and offers us computational models for both the individual modalities, as well as for the whole body emotion. | The overwhelming majority of previous works on emotion recognition from visual cues have focused on using only faces as cues @cite_44 . Recent surveys @cite_25 @cite_37 @cite_9 highlight the need to take bodily expressions into account as additional stimuli for automatic emotion recognition systems, as well as the lack of large-scale annotated data. | {
"cite_N": [
"@cite_44",
"@cite_9",
"@cite_37",
"@cite_25"
],
"mid": [
"2132664937",
"2024221294",
"2007041320",
"2784655662"
],
"abstract": [
"Why bodies? It is rather puzzling that given the massive interest in affective neuroscience in the last decade, it still seems to make sense to raise the question ‘Why bodies’ and to try to provide an answer to it, as is the goal of this article. There are now hundreds of articles on human emotion perception ranging from behavioural studies to brain imaging experiments. These experimental studies complement decades of reports on affective disorders in neurological patients and clinical studies of psychiatric populations. The most cursory glance at the literature on emotion in humans, now referred to by the umbrella term of social and affective neuroscience, shows that over 95 per cent of them have used faces as stimuli. Of the remaining 5 per cent, a few have used scenes or auditory information including human voices, music or environmental sounds. But by far the smallest number has looked into whole-body expressions. As a rough estimate, a search on PubMed today, 1 May 2009, yields 3521 hits for emotion × faces, 1003 hits for emotion × music and 339 hits for emotion × bodies. When looking in more detail, the body × emotion category in fact yields a majority of papers on well-being, nursing, sexual violence or organ donation. But the number of cognitive and affective neuroscience studies of emotional body perception as of today is lower than 20. @PARASPLIT Why then have whole bodies and bodily expressions not attracted the attention of researchers so far? The goal of this article is to contribute some elements for an answer to this question. I believe that there is something to learn from the historical neglect of bodies and bodily expressions. I will next address some historical misconceptions about whole-body perception, and in the process I intend not only to provide an impetus for this kind of work but also to contribute to a better understanding of the significance of the affective dimension of behaviour, mind and brain as seen from the vantage point of bodily communication. Subsequent sections discuss available evidence for the neurofunctional basis of facial and bodily expressions as well as neuropsychological and clinical studies of bodily expressions.",
"Thanks to the decreasing cost of whole-body sensing technology and its increasing reliability, there is an increasing interest in, and understanding of, the role played by body expressions as a powerful affective communication channel. The aim of this survey is to review the literature on affective body expression perception and recognition. One issue is whether there are universal aspects to affect expression perception and recognition models or if they are affected by human factors such as culture. Next, we discuss the difference between form and movement information as studies have shown that they are governed by separate pathways in the brain. We also review psychological studies that have investigated bodily configurations to evaluate if specific features can be identified that contribute to the recognition of specific affective states. The survey then turns to automatic affect recognition systems using body expressions as at least one input modality. The survey ends by raising open questions on data collecting, labeling, modeling, and setting benchmarks for comparing automatic recognition systems.",
"Body movements communicate affective expressions and, in recent years, computational models have been developed to recognize affective expressions from body movements or to generate movements for virtual agents or robots which convey affective expressions. This survey summarizes the state of the art on automatic recognition and generation of such movements. For both automatic recognition and generation, important aspects such as the movements analyzed, the affective state representation used, and the use of notation systems is discussed. The survey concludes with an outline of open problems and directions for future work.",
"Automatic emotion recognition has become a trending research topic in the past decade. While works based on facial expressions or speech abound, recognizing affect from body gestures remains a less explored topic. We present a new comprehensive survey hoping to boost research in the field. We first introduce emotional body gestures as a component of what is commonly known as \"body language\" and comment general aspects as gender differences and culture dependence. We then define a complete framework for automatic emotional body gesture recognition. We introduce person detection and comment static and dynamic body pose estimation methods both in RGB and 3D. We then comment the recent literature related to representation learning and emotion recognition from images of emotionally expressive gestures. We also discuss multi-modal approaches that combine speech or face with body gestures for improved emotion recognition. While pre-processing methodologies (e.g. human detection and pose estimation) are nowadays mature technologies fully developed for robust large scale analysis, we show that for emotion recognition the quantity of labelled data is scarce, there is no agreement on clearly defined output spaces and the representations are shallow and largely based on naive geometrical representations."
]
} |
1901.01805 | 2908176280 | In this paper we address the problem of multi-cue affect recognition in challenging environments such as child-robot interaction. Towards this goal we propose a method for automatic recognition of affect that leverages body expressions alongside facial expressions, as opposed to traditional methods that usually focus only on the latter. We evaluate our methods on a challenging child-robot interaction database of emotional expressions, as well as on a database of emotional expressions by actors, and show that the proposed method achieves significantly better results when compared with the facial expression baselines, can be trained both jointly and separately, and offers us computational models for both the individual modalities, as well as for the whole body emotion. | Gunes and Piccardi @cite_32 @cite_27 focused on combining handcrafted facial and body features for recognizing 12 different affective states in a subset of the FABO dataset @cite_4 , which contains upper-body affective recordings of 23 subjects. @cite_39 used Sobel filters combined with convolutional layers on the same database, while @cite_15 used a hierarchical combination of bidirectional long short-term memory (LSTM) and convolutional layers, with body-face fusion performed using support vector machines. | {
"cite_N": [
"@cite_4",
"@cite_32",
"@cite_39",
"@cite_27",
"@cite_15"
],
"mid": [
"1978262206",
"2071954570",
"1871419576",
"2132680170",
"2771285237"
],
"abstract": [
"To be able to develop and test robust affective multimodal systems, researchers need access to novel databases containing representative samples of human multi-modal expressive behavior. The creation of such databases requires a major effort in the definition of representative behaviors, the choice of expressive modalities, and the collection and labeling of large amount of data. At present, public databases only exist for single expressive modalities such as facial expression analysis. There also exist a number of gesture databases of static and dynamic hand postures and dynamic hand gestures. However, there is not a readily available database combining affective face and body information in a genuine bimodal manner. Accordingly, in this paper, we present a bimodal database recorded by two high-resolution cameras simultaneously for use in automatic analysis of human nonverbal affective behavior",
"Psychological research findings suggest that humans rely on the combined visual channels of face and body more than any other channel when they make judgments about human communicative behavior. However, most of the existing systems attempting to analyze the human nonverbal behavior are mono-modal and focus only on the face. Research that aims to integrate gestures as an expression mean has only recently emerged. Accordingly, this paper presents an approach to automatic visual recognition of expressive face and upper-body gestures from video sequences suitable for use in a vision-based affective multi-modal framework. Face and body movements are captured simultaneously using two separate cameras. For each video sequence single expressive frames both from face and body are selected manually for analysis and recognition of emotions. Firstly, individual classifiers are trained from individual modalities. Secondly, we fuse facial expression and affective body gesture information at the feature and at the decision level. In the experiments performed, the emotion classification using the two modalities achieved a better recognition accuracy outperforming classification using the individual facial or bodily modality alone.",
"Emotional state recognition has become an important topic for human-robot interaction in the past years. By determining emotion expressions, robots can identify important variables of human behavior and use these to communicate in a more human-like fashion and thereby extend the interaction possibilities. Human emotions are multimodal and spontaneous, which makes them hard to be recognized by robots. Each modality has its own restrictions and constraints which, together with the non-structured behavior of spontaneous expressions, create several difficulties for the approaches present in the literature, which are based on several explicit feature extraction techniques and manual modality fusion. Our model uses a hierarchical feature representation to deal with spontaneous emotions, and learns how to integrate multiple modalities for non-verbal emotion recognition, making it suitable to be used in an HRI scenario. Our experiments show that a significant improvement of recognition accuracy is achieved when we use hierarchical features and multimodal information, and our model improves the accuracy of state-of-the-art approaches from 82.5 reported in the literature to 91.3 for a benchmark dataset on spontaneous emotion expressions.",
"Psychologists have long explored mechanisms with which humans recognize other humans' affective states from modalities, such as voice and face display. This exploration has led to the identification of the main mechanisms, including the important role played in the recognition process by the modalities' dynamics. Constrained by the human physiology, the temporal evolution of a modality appears to be well approximated by a sequence of temporal segments called onset, apex, and offset. Stemming from these findings, computer scientists, over the past 15 years, have proposed various methodologies to automate the recognition process. We note, however, two main limitations to date. The first is that much of the past research has focused on affect recognition from single modalities. The second is that even the few multimodal systems have not paid sufficient attention to the modalities' dynamics: The automatic determination of their temporal segments, their synchronization to the purpose of modality fusion, and their role in affect recognition are yet to be adequately explored. To address this issue, this paper focuses on affective face and body display, proposes a method to automatically detect their temporal segments or phases, explores whether the detection of the temporal phases can effectively support recognition of affective states, and recognizes affective states based on phase synchronization alignment. The experimental results obtained show the following: 1) affective face and body displays are simultaneous but not strictly synchronous; 2) explicit detection of the temporal phases can improve the accuracy of affect recognition; 3) recognition from fused face and body modalities performs better than that from the face or the body modality alone; and 4) synchronized feature-level fusion achieves better performance than decision-level fusion.",
"Abstract Affect presentation is periodic and multi-modal, such as through facial movements, body gestures, and so on. Studies have shown that temporal selection and multi-modal combinations may benefit affect recognition. In this article, we therefore propose a spatio-temporal fusion model that extracts spatio-temporal hierarchical features based on select expressive components. In addition, a multi-modal hierarchical fusion strategy is presented. Our model learns the spatio-temporal hierarchical features from videos by a proposed deep network, which combines a convolutional neural networks (CNN), bilateral long short-term memory recurrent neural networks (BLSTM-RNN) with principal component analysis (PCA). Our approach handles each video as a “video sentence.” It first obtains a skeleton with the temporal selection process and then segments key words with a certain sliding window. Finally, it obtains the features with a deep network comprised of a video-skeleton and video-words. Our model combines the feature level and decision level fusion for fusing the multi-modal information. Experimental results showed that our model improved the multi-modal affect recognition accuracy rate from 95.13 in existing literature to 99.57 on a face and body (FABO) database, our results have been increased by 4.44 , and it obtained a macro average accuracy (MAA) up to 99.71 ."
]
} |
1901.01805 | 2908176280 | In this paper we address the problem of multi-cue affect recognition in challenging environments such as child-robot interaction. Towards this goal we propose a method for automatic recognition of affect that leverages body expressions alongside facial expressions, as opposed to traditional methods that usually focus only on the latter. We evaluate our methods on a challenging child-robot interaction database of emotional expressions, as well as on a database of emotional expressions by actors, and show that the proposed method achieves significantly better results when compared with the facial expression baselines, can be trained both jointly and separately, and offers us computational models for both the individual modalities, as well as for the whole body emotion. | Bänziger @cite_42 introduced the GEMEP (GEneva Multimodal Emotion Portrayals) corpus, the Core Set of which includes 10 actors performing 12 emotional expressions. In @cite_5 , a body action and posture (BAP) coding system is introduced, similar to the Facial Action Coding System (FACS) that is used for coding human facial expressions; it was subsequently used in @cite_10 for classifying and analyzing the bodily emotional expressions found in the GEMEP corpus. | {
"cite_N": [
"@cite_5",
"@cite_42",
"@cite_10"
],
"mid": [
"2034620660",
"2329093554",
"2085003717"
],
"abstract": [
"Several methods are available for coding body movement in nonverbal behavior research, but there is no consensus on a reliable coding system that can be used for the study of emotion expression. Adopting an integrative approach, we developed a new method, the body action and posture coding system, for the time-aligned micro description of body movement on an anatomical level (different articulations of body parts), a form level (direction and orientation of movement), and a functional level (communicative and self-regulatory functions). We applied the system to a new corpus of acted emotion portrayals, examined its comprehensiveness and demonstrated intercoder reliability at three levels: (a) occurrence, (b) temporal precision, and (c) segmentation. We discuss issues for further validation and propose some research applications.",
"Research on the perception of emotional expressions in faces and voices is exploding in psychology, the neurosciences, and affective computing. This article provides an overview of some of the major emotion expression (EE) corpora currently available for empirical research and introduces a new, dynamic, multimodal corpus of emotion expressions, the Geneva Multimodal Emotion Portrayals Core Set (GEMEP-CS). The design features of the corpus are outlined and justified, and detailed validation data for the core set selection are presented and discussed. Finally, an associated database with microcoded facial, vocal, and body action elements, as well as observer ratings, is introduced.",
"Emotion communication research strongly focuses on the face and voice as expressive modalities, leaving the rest of the body relatively understudied. Contrary to the early assumption that body movement only indicates emotional intensity, recent studies have shown that body movement and posture also conveys emotion specific information. However, a deeper understanding of the underlying mechanisms is hampered by a lack of production studies informed by a theoretical framework. In this research we adopted the Body Action and Posture (BAP) coding system to examine the types and patterns of body movement that are employed by 10 professional actors to portray a set of 12 emotions. We investigated to what extent these expression patterns support explicit or implicit predictions from basic emotion theory, bidimensional theory, and componential appraisal theory. The overall results showed partial support for the different theoretical approaches. They revealed that several patterns of body movement systematically occur in portrayals of specific emotions, allowing emotion differentiation. Although a few emotions were prototypically expressed by one particular pattern, most emotions were variably expressed by multiple patterns, many of which can be explained as reflecting functional components of emotion such as modes of appraisal and action readiness. It is concluded that further work in this largely underdeveloped area should be guided by an appropriate theoretical framework to allow a more systematic design of experiments and clear hypothesis testing."
]
} |
1901.01805 | 2908176280 | In this paper we address the problem of multi-cue affect recognition in challenging environments such as child-robot interaction. Towards this goal we propose a method for automatic recognition of affect that leverages body expressions alongside facial expressions, as opposed to traditional methods that usually focus only on the latter. We evaluate our methods on a challenging child-robot interaction database of emotional expressions, as well as on a database of emotional expressions by actors, and show that the proposed method achieves significantly better results when compared with the facial expression baselines, can be trained both jointly and separately, and offers us computational models for both the individual modalities, as well as for the whole body emotion. | The authors of @cite_17 recorded a database of 10 participants performing 8 emotions, using the same framework as the GEMEP dataset. Afterwards, they fused audio, facial and body movement features using different Bayesian classifiers to automatically recognize the depicted emotions. In @cite_38 a two-branch face-body late-fusion scheme is presented, combining handcrafted features from 3D body joints with action unit detection based on facial landmarks. | {
"cite_N": [
"@cite_38",
"@cite_17"
],
"mid": [
"2551247055",
"1555767263"
],
"abstract": [
"A challenging research issue, which has recently attracted a lot of attention, is the incorporation of emotion recognition technology in serious games applications, in order to improve the quality of interaction and enhance the gaming experience. To this end, in this paper, we present an emotion recognition methodology that utilizes information extracted from multimodal fusion analysis to identify the affective state of players during gameplay scenarios. More specifically, two monomodal classifiers have been designed for extracting affective state information based on facial expression and body motion analysis. For the combination of different modalities a deep model is proposed that is able to make a decision about player's affective state, while also being robust in the absence of one information cue. In order to evaluate the performance of our methodology, a bimodal database was created using Microsoft's Kinect sensor, containing feature vectors extracted from users' facial expressions and body gestures. The proposed method achieved higher recognition rate in comparison with mono-modal, as well as early-fusion algorithms. Our methodology outperforms all other classifiers, achieving an overall recognition rate of 98.3 .",
"In this paper we present a multimodal approach for the recognition of eight emotions. Our approach integrates information from facial expressions, body movement and gestures and speech. We trained and tested a model with a Bayesian classifier, using a multimodal corpus with eight emotions and ten subjects. Firstly, individual classifiers were trained for each modality. Next, data were fused at the feature level and the decision level. Fusing the multimodal data resulted in a large increase in the recognition rates in comparison with the unimodal systems: the multimodal approach gave an improvement of more than 10 when compared to the most successful unimodal system. Further, the fusion performed at the feature level provided better results than the one performed at the decision level."
]
} |
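The feature-level versus decision-level fusion contrast discussed in the record above is easy to make concrete. Below is a minimal sketch, not taken from any of the cited works; the function names, class count, and equal weights are illustrative assumptions. Decision-level fusion averages per-modality class posteriors, while feature-level fusion concatenates modality features before a single classifier.

```python
import numpy as np

def decision_level_fusion(face_probs, body_probs, w_face=0.5, w_body=0.5):
    """Fuse per-modality class posteriors by weighted averaging."""
    fused = w_face * face_probs + w_body * body_probs
    return int(np.argmax(fused))

def feature_level_fusion(face_feats, body_feats):
    """Concatenate modality features before a single classifier."""
    return np.concatenate([face_feats, body_feats], axis=-1)

# Example: 8 emotion classes, as in the corpus recorded in @cite_17.
rng = np.random.default_rng(0)
face_probs = rng.dirichlet(np.ones(8))   # stand-in face classifier output
body_probs = rng.dirichlet(np.ones(8))   # stand-in body classifier output
print(decision_level_fusion(face_probs, body_probs))
```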
1901.01805 | 2908176280 | In this paper we address the problem of multi-cue affect recognition in challenging environments such as child-robot interaction. Towards this goal we propose a method for automatic recognition of affect that leverages body expressions alongside facial expressions, as opposed to traditional methods that usually focus only on the latter. We evaluate our methods on a challenging child-robot interaction database of emotional expressions, as well as on a database of emotional expressions by actors, and show that the proposed method achieves significantly better results when compared with the facial expression baselines, can be trained both jointly and separately, and offers us computational models for both the individual modalities, as well as for the whole body emotion. | Regarding the application of affect recognition in CRI, the necessity of empathy as a primary capability of social robots for establishing positive long-term human-robot interaction has been the subject of several studies @cite_16 @cite_43 . The authors of @cite_11 presented a system that learned to perceive affective expressions of children playing chess with an iCat robot by processing their facial expressions. This knowledge was then used to modify the behavior of the robot, which resulted in a more engaging and friendly interaction. Under the same application, @cite_20 used body motion and affective posture to detect the engagement of children in the task. In @cite_34 , 3D human pose was used as a way of identifying actions in child-robot-therapist interaction for children with autism. In the same study, the 3D pose was also successfully used to estimate the affective state of the child in the continuous arousal and valence dimensions. | {
"cite_N": [
"@cite_16",
"@cite_43",
"@cite_34",
"@cite_20",
"@cite_11"
],
"mid": [
"1985945240",
"103982469",
"2798569594",
"2105410877",
"2126554711"
],
"abstract": [
"This research investigates the meaning of “human-computer relationship” and presents techniques for constructing, maintaining, and evaluating such relationships, based on research in social psychology, sociolinguistics, communication and other social sciences. Contexts in which relationships are particularly important are described, together with specific benefits (like trust) and task outcomes (like improved learning) known to be associated with relationship quality. We especially consider the problem of designing for long-term interaction, and define relational agents as computational artifacts designed to establish and maintain long-term social-emotional relationships with their users. We construct the first such agent, and evaluate it in a controlled experiment with 101 users who were asked to interact daily with an exercise adoption system for a month. Compared to an equivalent task-oriented agent without any deliberate social-emotional or relationship-building skills, the relational agent was respected more, liked more, and trusted more, even after four weeks of interaction. Additionally, users expressed a significantly greater desire to continue working with the relational agent after the termination of the study. We conclude by discussing future directions for this research together with ethical and other ramifications of this work for HCI designers.",
"As a great number of robotic products are entering people’s lives, the question of how can they behave in order to sustain long-term interactions with users becomes increasingly more relevant. In this paper, we present an empathic model for social robots that aim to interact with children for extended periods of time. The application of this model to a scenario where a social robot plays chess with children is described. To evaluate the proposed model, we conducted a long-term study in an elementary school and measured children’s perception of social presence, engagement and social support.",
"We introduce new, fine-grained action and emotion recognition tasks defined on non-staged videos, recorded during robot-assisted therapy sessions of children with autism. The tasks present several challenges: a large dataset with long videos, a large number of highly variable actions, children that are only partially visible, have different ages and may show unpredictable behaviour, as well as non-standard camera viewpoints. We investigate how state-of-the-art 3d human pose reconstruction methods perform on the newly introduced tasks and propose extensions to adapt them to deal with these challenges. We also analyze multiple approaches in action and emotion recognition from 3d human pose data, establish several baselines, and discuss results and their implications in the context of child-robot interaction.",
"The design of an affect recognition system for socially perceptive robots relies on representative data: human-robot interaction in naturalistic settings requires an affect recognition system to be trained and validated with contextualised affective expressions, that is, expressions that emerge in the same interaction scenario of the target application. In this paper we propose an initial computational model to automatically analyse human postures and body motion to detect engagement of children playing chess with an iCat robot that acts as a game companion. Our approach is based on vision-based automatic extraction of expressive postural features from videos capturing the behaviour of the children from a lateral view. An initial evaluation, conducted by training several recognition models with contextualised affective postural expressions, suggests that patterns of postural behaviour can be used to accurately predict the engagement of the children with the robot, thus making our approach suitable for integration into an affect recognition system for a game companion in a real world scenario.",
"Affect recognition for socially perceptive robots relies on representative data. While many of the existing affective corpora and databases contain posed and decontextualized affective expressions, affect resources for designing an affect recognition system in naturalistic human–robot interaction (HRI) must include context-rich expressions that emerge in the same scenario of the final application. In this paper, we propose a context-based approach to the collection and modeling of representative data for building an affect-sensitive robotic game companion. To illustrate our approach we present the key features of the Inter-ACT (INTEracting with Robots–Affect Context Task) corpus, an affective and contextually rich multimodal video corpus containing affective expressions of children playing chess with an iCat robot. We show how this corpus can be successfully used to train a context-sensitive affect recognition system (a valence detector) for a robotic game companion. Finally, we demonstrate how the integration of the affect recognition system in a modular platform for adaptive HRI makes the interaction with the robot more engaging."
]
} |
1901.01760 | 2907137919 | We explore the importance of spatial contextual information in human pose estimation. Most state-of-the-art pose networks are trained in a multi-stage manner and produce several auxiliary predictions for deep supervision. With this principle, we present two conceptually simple and yet computationally efficient modules, namely Cascade Prediction Fusion (CPF) and Pose Graph Neural Network (PGNN), to exploit underlying contextual information. Cascade prediction fusion accumulates prediction maps from previous stages to extract informative signals. The resulting maps also function as a prior to guide prediction at following stages. To promote spatial correlation among joints, our PGNN learns a structured representation of human pose as a graph. Direct message passing between different joints is enabled and spatial relation is captured. These two modules require very limited computational complexity. Experimental results demonstrate that our method consistently outperforms previous methods on MPII and LSP benchmarks. | The key to human pose estimation lies in joint detection and spatial relation configuration. Previous human pose estimation methods can be divided into two groups. The first is to learn feature representations using powerful CNNs. These methods detect body joint locations directly or predict score maps for body joints. Early methods like DeepPose @cite_41 regressed joint locations with multiple stages. Later, Fan et al. @cite_25 combined local and global features to improve performance. To connect the input and output space, Carreira et al. @cite_20 iteratively concatenated the input image with the previous prediction at each step. Following the paradigm of semantic segmentation @cite_51 @cite_54 , the methods of @cite_10 @cite_19 @cite_8 used Gaussian peaks to represent part locations. A fully convolutional neural network @cite_51 is then applied to estimate body joint locations. These methods produce high-quality representations; however, they do not predict the structure among body joints. | {
"cite_N": [
"@cite_8",
"@cite_41",
"@cite_10",
"@cite_54",
"@cite_19",
"@cite_51",
"@cite_25",
"@cite_20"
],
"mid": [
"2742737904",
"2113325037",
"2307770531",
"2412782625",
"2964304707",
"1903029394",
"1948369226",
"2963474899"
],
"abstract": [
"Articulated human pose estimation is a fundamental yet challenging task in computer vision. The difficulty is particularly pronounced in scale variations of human body parts when camera view changes or severe foreshortening happens. Although pyramid methods are widely used to handle scale changes at inference time, learning feature pyramids in deep convolutional neural networks (DCNNs) is still not well explored. In this work, we design a Pyramid Residual Module (PRMs) to enhance the invariance in scales of DCNNs. Given input features, the PRMs learn convolutional filters on various scales of input features, which are obtained with different subsampling ratios in a multibranch network. Moreover, we observe that it is inappropriate to adopt existing methods to initialize the weights of multi-branch networks, which achieve superior performance than plain networks in many tasks recently. Therefore, we provide theoretic derivation to extend the current weight initialization scheme to multi-branch network structures. We investigate our method on two standard benchmarks for human pose estimation. Our approach obtains state-of-the-art results on both benchmarks. Code is available at https: github.com bearpaw PyraNet.",
"We propose a method for human pose estimation based on Deep Neural Networks (DNNs). The pose estimation is formulated as a DNN-based regression problem towards body joints. We present a cascade of such DNN regres- sors which results in high precision pose estimates. The approach has the advantage of reasoning about pose in a holistic fashion and has a simple but yet powerful formula- tion which capitalizes on recent advances in Deep Learn- ing. We present a detailed empirical analysis with state-of- art or better performance on four academic benchmarks of diverse real-world images.",
"This work introduces a novel convolutional network architecture for the task of human pose estimation. Features are processed across all scales and consolidated to best capture the various spatial relationships associated with the body. We show how repeated bottom-up, top-down processing used in conjunction with intermediate supervision is critical to improving the performance of the network. We refer to the architecture as a “stacked hourglass” network based on the successive steps of pooling and upsampling that are done to produce a final set of predictions. State-of-the-art results are achieved on the FLIC and MPII benchmarks outcompeting all recent methods.",
"In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First , we highlight convolution with upsampled filters, or ‘atrous convolution’, as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second , we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields-of-views, thus capturing objects as well as image context at multiple scales. Third , we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but has a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed “DeepLab” system sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7 percent mIOU in the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.",
"Pose Machines provide a sequential prediction framework for learning rich implicit spatial models. In this work we show a systematic design for how convolutional networks can be incorporated into the pose machine framework for learning image features and image-dependent spatial models for the task of pose estimation. The contribution of this paper is to implicitly model long-range dependencies between variables in structured prediction tasks such as articulated pose estimation. We achieve this by designing a sequential architecture composed of convolutional networks that directly operate on belief maps from previous stages, producing increasingly refined estimates for part locations, without the need for explicit graphical model-style inference. Our approach addresses the characteristic difficulty of vanishing gradients during training by providing a natural learning objective function that enforces intermediate supervision, thereby replenishing back-propagated gradients and conditioning the learning procedure. We demonstrate state-of-the-art performance and outperform competing methods on standard benchmarks including the MPII, LSP, and FLIC datasets.",
"Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet [20], the VGG net [31], and GoogLeNet [32]) into fully convolutional networks and transfer their learned representations by fine-tuning [3] to the segmentation task. We then define a skip architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20 relative improvement to 62.2 mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes less than one fifth of a second for a typical image.",
"We propose a new learning-based method for estimating 2D human pose from a single image, using Dual-Source Deep Convolutional Neural Networks (DS-CNN). Recently, many methods have been developed to estimate human pose by using pose priors that are estimated from physiologically inspired graphical models or learned from a holistic perspective. In this paper, we propose to integrate both the local (body) part appearance and the holistic view of each local part for more accurate human pose estimation. Specifically, the proposed DS-CNN takes a set of image patches (category-independent object proposals for training and multi-scale sliding windows for testing) as the input and then learns the appearance of each local part by considering their holistic views in the full body. Using DS-CNN, we achieve both joint detection, which determines whether an image patch contains a body joint, and joint localization, which finds the exact location of the joint in the image patch. Finally, we develop an algorithm to combine these joint detection localization results from all the image patches for estimating the human pose. The experimental results show the effectiveness of the proposed method by comparing to the state-of-the-art human-pose estimation methods based on pose priors that are estimated from physiologically inspired graphical models or learned from a holistic perspective.",
"Hierarchical feature extractors such as Convolutional Networks (ConvNets) have achieved impressive performance on a variety of classification tasks using purely feedforward processing. Feedforward architectures can learn rich representations of the input space but do not explicitly model dependencies in the output spaces, that are quite structured for tasks such as articulated human pose estimation or object segmentation. Here we propose a framework that expands the expressive power of hierarchical feature extractors to encompass both input and output spaces, by introducing top-down feedback. Instead of directly predicting the outputs in one go, we use a self-correcting model that progressively changes an initial solution by feeding back error predictions, in a process we call Iterative Error Feedback (IEF). IEF shows excellent performance on the task of articulated pose estimation in the challenging MPII and LSP benchmarks, matching the state-of-the-art without requiring ground truth scale annotation."
]
} |
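The heatmap-based methods cited in the record above represent each joint as a Gaussian peak in a per-joint score map that a fully convolutional network then regresses. Here is a minimal sketch of that target representation; the map size, sigma, and joint coordinates are illustrative assumptions, not values from the cited papers.

```python
import numpy as np

def joint_heatmap(h, w, cx, cy, sigma=2.0):
    """Render a 2D Gaussian peak centred on a joint location, the
    ground-truth score map regressed by FCN-style pose networks."""
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2.0 * sigma ** 2))

# One 64x64 score map per joint; at test time the joint is read back
# as the argmax of the predicted map.
heatmap = joint_heatmap(64, 64, cx=20.0, cy=35.0)
pred_x, pred_y = np.unravel_index(heatmap.argmax(), heatmap.shape)[::-1]
```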
1901.01760 | 2907137919 | We explore the importance of spatial contextual information in human pose estimation. Most state-of-the-art pose networks are trained in a multi-stage manner and produce several auxiliary predictions for deep supervision. With this principle, we present two conceptually simple and yet computationally efficient modules, namely Cascade Prediction Fusion (CPF) and Pose Graph Neural Network (PGNN), to exploit underlying contextual information. Cascade prediction fusion accumulates prediction maps from previous stages to extract informative signals. The resulting maps also function as a prior to guide prediction at following stages. To promote spatial correlation among joints, our PGNN learns a structured representation of human pose as a graph. Direct message passing between different joints is enabled and spatial relation is captured. These two modules require very limited computational complexity. Experimental results demonstrate that our method consistently outperforms previous methods on MPII and LSP benchmarks. | The other group focuses on modeling the spatial relationships between body joints. Pictorial structures @cite_9 modeled spatial deformation by designing pairwise terms between different joints. To deal with human poses with large variation, a mixture model is learned for each joint. Yang et al. @cite_52 used a part mixture model to infer spatial relations with a tree structure. Such a tree structure, however, may not capture very complicated relations. Subsequent methods introduced other models, such as loopy structures @cite_33 and poselets @cite_9 , to further improve performance. | {
"cite_N": [
"@cite_9",
"@cite_52",
"@cite_33"
],
"mid": [
"2097151019",
"1994529670",
"2049768550"
],
"abstract": [
"In this paper we consider the challenging problem of articulated human pose estimation in still images. We observe that despite high variability of the body articulations, human motions and activities often simultaneously constrain the positions of multiple body parts. Modelling such higher order part dependencies seemingly comes at a cost of more expensive inference, which resulted in their limited use in state-of-the-art methods. In this paper we propose a model that incorporates higher order part dependencies while remaining efficient. We achieve this by defining a conditional model in which all body parts are connected a-priori, but which becomes a tractable tree-structured pictorial structures model once the image observations are available. In order to derive a set of conditioning variables we rely on the poselet-based features that have been shown to be effective for people detection but have so far found limited application for articulated human pose estimation. We demonstrate the effectiveness of our approach on three publicly available pose estimation benchmarks improving or being on-par with state of the art in each case.",
"We describe a method for human pose estimation in static images based on a novel representation of part models. Notably, we do not use articulated limb parts, but rather capture orientation with a mixture of templates for each part. We describe a general, flexible mixture model for capturing contextual co-occurrence relations between parts, augmenting standard spring models that encode spatial relations. We show that such relations can capture notions of local rigidity. When co-occurrence and spatial relations are tree-structured, our model can be efficiently optimized with dynamic programming. We present experimental results on standard benchmarks for pose estimation that indicate our approach is the state-of-the-art system for pose estimation, outperforming past work by 50 while being orders of magnitude faster.",
"We consider the problem of human parsing with part-based models. Most previous work in part-based models only considers rigid parts (e.g. torso, head, half limbs) guided by human anatomy. We argue that this representation of parts is not necessarily appropriate for human parsing. In this paper, we introduce hierarchical poselets–a new representation for human parsing. Hierarchical poselets can be rigid parts, but they can also be parts that cover large portions of human bodies (e.g. torso + left arm). In the extreme case, they can be the whole bodies. We develop a structured model to organize poselets in a hierarchical way and learn the model parameters in a max-margin framework. We demonstrate the superior performance of our proposed approach on two datasets with aggressive pose variations."
]
} |
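To illustrate the pairwise deformation terms used by the pictorial-structure and tree-structured mixture models in the record above, here is a toy scoring function. This is a sketch, not any cited implementation: the spring weight, rest offsets, and part list are made-up parameters. A candidate pose is scored by per-joint appearance evidence minus quadratic spring costs along the tree edges.

```python
import numpy as np

def pose_score(locs, unary, edges, rest, spring=0.1):
    """Tree-structured spring model: sum of per-joint appearance
    scores minus quadratic deformation costs on each tree edge."""
    s = sum(unary[j][y, x] for j, (x, y) in enumerate(locs))
    for (p, c) in edges:
        dx = locs[c][0] - locs[p][0] - rest[(p, c)][0]
        dy = locs[c][1] - locs[p][1] - rest[(p, c)][1]
        s -= spring * (dx * dx + dy * dy)
    return s

rng = np.random.default_rng(0)
unary = [rng.random((32, 32)) for _ in range(3)]  # e.g. head, neck, torso
edges = [(0, 1), (1, 2)]                          # tree over the 3 parts
rest = {(0, 1): (0, 4), (1, 2): (0, 6)}           # expected child offsets
print(pose_score([(16, 4), (16, 8), (16, 15)], unary, edges, rest))
```

In the full models, the best-scoring configuration is found efficiently by dynamic programming over the tree rather than by enumerating candidates as here.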
1901.01760 | 2907137919 | We explore the importance of spatial contextual information in human pose estimation. Most state-of-the-art pose networks are trained in a multi-stage manner and produce several auxiliary predictions for deep supervision. With this principle, we present two conceptually simple and yet computationally efficient modules, namely Cascade Prediction Fusion (CPF) and Pose Graph Neural Network (PGNN), to exploit underlying contextual information. Cascade prediction fusion accumulates prediction maps from previous stages to extract informative signals. The resulting maps also function as a prior to guide prediction at following stages. To promote spatial correlation among joints, our PGNN learns a structured representation of human pose as a graph. Direct message passing between different joints is enabled and spatial relation is captured. These two modules require very limited computational complexity. Experimental results demonstrate that our method consistently outperforms previous methods on MPII and LSP benchmarks. | Later methods @cite_36 @cite_27 modeled structures via CNNs. Tompson et al. @cite_27 utilized a Markov Random Field (MRF) to model the distribution of body parts. Convolutional priors were used to define the pairwise terms between joints. The method of @cite_53 utilized geometrical transform kernels to capture relations between joints on feature maps. | {
"cite_N": [
"@cite_36",
"@cite_27",
"@cite_53"
],
"mid": [
"1936750108",
"2136391815",
"2330154883"
],
"abstract": [
"Recent state-of-the-art performance on human-body pose estimation has been achieved with Deep Convolutional Networks (ConvNets). Traditional ConvNet architectures include pooling and sub-sampling layers which reduce computational requirements, introduce invariance and prevent over-training. These benefits of pooling come at the cost of reduced localization accuracy. We introduce a novel architecture which includes an efficient ‘position refinement’ model that is trained to estimate the joint offset location within a small region of the image. This refinement model is jointly trained in cascade with a state-of-the-art ConvNet model [21] to achieve improved accuracy in human joint location estimation. We show that the variance of our detector approaches the variance of human annotations on the FLIC [20] dataset and outperforms all existing approaches on the MPII-human-pose dataset [1].",
"This paper proposes a new hybrid architecture that consists of a deep Convolu-tional Network and a Markov Random Field. We show how this architecture is successfully applied to the challenging problem of articulated human pose estimation in monocular images. The architecture can exploit structural domain constraints such as geometric relationships between body joint locations. We show that joint training of these two model paradigms improves performance and allows us to significantly outperform existing state-of-the-art techniques.",
"In this paper, we propose a structured feature learning framework to reason the correlations among body joints at the feature level in human pose estimation. Different from existing approaches of modeling structures on score maps or predicted labels, feature maps preserve substantially richer descriptions of body joints. The relationships between feature maps of joints are captured with the introduced geometrical transform kernels, which can be easily implemented with a convolution layer. Features and their relationships are jointly learned in an end-to-end learning system. A bi-directional tree structured model is proposed, so that the feature channels at a body joint can well receive information from other joints. The proposed framework improves feature learning substantially. With very simple post processing, it reaches the best mean PCP on the LSP and FLIC datasets. Compared with the baseline of learning features at each joint separately with ConvNet, the mean PCP has been improved by 18 on FLIC. The code is released to the public. 1"
]
} |
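The "geometrical transform kernels" in @cite_53 above amount to passing messages between joint feature maps with convolutions whose learned kernels encode expected spatial offsets. A rough sketch follows; the fixed, hand-set kernel stands in for a learned one, and the map sizes and offset are invented for illustration.

```python
import numpy as np
from scipy.signal import convolve2d

def pass_message(src_map, kernel):
    """Transform a neighbouring joint's feature map with a geometric
    kernel and return it as context for another joint's map."""
    return convolve2d(src_map, kernel, mode="same")

elbow = np.random.rand(32, 32)            # stand-in elbow feature map
# A kernel whose peak is offset from the centre encodes the expected
# spatial shift between the elbow and the wrist.
kernel = np.zeros((7, 7))
kernel[1, 3] = 1.0
wrist_context = pass_message(elbow, kernel)
wrist_score = np.random.rand(32, 32) + wrist_context  # fused evidence
```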
1901.01760 | 2907137919 | We explore the importance of spatial contextual information in human pose estimation. Most state-of-the-art pose networks are trained in a multi-stage manner and produce several auxiliary predictions for deep supervision. With this principle, we present two conceptually simple and yet computationally efficient modules, namely Cascade Prediction Fusion (CPF) and Pose Graph Neural Network (PGNN), to exploit underlying contextual information. Cascade prediction fusion accumulates prediction maps from previous stages to extract informative signals. The resulting maps also function as a prior to guide prediction at following stages. To promote spatial correlation among joints, our PGNN learns a structured representation of human pose as a graph. Direct message passing between different joints is enabled and spatial relation is captured. These two modules require very limited computational complexity. Experimental results demonstrate that our method consistently outperforms previous methods on MPII and LSP benchmarks. | Previous work on feature learning for graph-structured data can be divided into two categories. One direction is to apply CNNs to graphs. Based on the graph Laplacian, the methods of @cite_50 @cite_39 @cite_26 applied CNNs in the spectral domain. In order to operate CNNs directly on graphs, the method of @cite_21 used a special hash function. The other line focuses on recurrently applying neural networks to each node of the graph. The state of each node can be updated based on its history and new information passing through edges. The Graph Neural Network (GNN) was first proposed in @cite_5 . It utilized multi-layer perceptrons (MLP) to learn the hidden states of nodes in graphs. However, the contraction map assumption is a restriction. As an extension, the Gated Graph Neural Network (GGNN) @cite_7 adopted a recurrent gating function @cite_16 to update the hidden states and output sequences. Parameters of the final model can be effectively optimized by the back-propagation through time (BPTT) algorithm. Very recently, GGNNs were used in image classification @cite_17 , situation recognition @cite_40 and RGBD semantic segmentation @cite_48 . | {
"cite_N": [
"@cite_26",
"@cite_7",
"@cite_48",
"@cite_21",
"@cite_39",
"@cite_40",
"@cite_50",
"@cite_5",
"@cite_16",
"@cite_17"
],
"mid": [
"2964015378",
"2950898568",
"2776622059",
"2964113829",
"2964321699",
"2963346996",
"1662382123",
"2116341502",
"2950635152",
"2581887665"
],
"abstract": [
"We present a scalable approach for semi-supervised learning on graph-structured data that is based on an efficient variant of convolutional neural networks which operate directly on graphs. We motivate the choice of our convolutional architecture via a localized first-order approximation of spectral graph convolutions. Our model scales linearly in the number of graph edges and learns hidden layer representations that encode both local graph structure and features of nodes. In a number of experiments on citation networks and on a knowledge graph dataset we demonstrate that our approach outperforms related methods by a significant margin.",
"Graph-structured data appears frequently in domains including chemistry, natural language semantics, social networks, and knowledge bases. In this work, we study feature learning techniques for graph-structured inputs. Our starting point is previous work on Graph Neural Networks (, 2009), which we modify to use gated recurrent units and modern optimization techniques and then extend to output sequences. The result is a flexible and broadly useful class of neural network models that has favorable inductive biases relative to purely sequence-based models (e.g., LSTMs) when the problem is graph-structured. We demonstrate the capabilities on some simple AI (bAbI) and graph algorithm learning tasks. We then show it achieves state-of-the-art performance on a problem from program verification, in which subgraphs need to be matched to abstract data structures.",
"RGBD semantic segmentation requires joint reasoning about 2D appearance and 3D geometric information. In this paper we propose a 3D graph neural network (3DGNN) that builds a k-nearest neighbor graph on top of 3D point cloud. Each node in the graph corresponds to a set of points and is associated with a hidden representation vector initialized with an appearance feature extracted by a unary CNN from 2D images. Relying on recurrent functions, every node dynamically updates its hidden representation based on the current status and incoming messages from its neighbors. This propagation model is unrolled for a certain number of time steps and the final per-node representation is used for predicting the semantic class of each pixel. We use back-propagation through time to train the model. Extensive experiments on NYUD2 and SUN-RGBD datasets demonstrate the effectiveness of our approach.",
"We introduce a convolutional neural network that operates directly on graphs. These networks allow end-to-end learning of prediction pipelines whose inputs are graphs of arbitrary size and shape. The architecture we present generalizes standard molecular feature extraction methods based on circular fingerprints. We show that these data-driven features are more interpretable, and have better predictive performance on a variety of tasks.",
"In this work, we are interested in generalizing convolutional neural networks (CNNs) from low-dimensional regular grids, where image, video and speech are represented, to high-dimensional irregular domains, such as social networks, brain connectomes or words' embedding, represented by graphs. We present a formulation of CNNs in the context of spectral graph theory, which provides the necessary mathematical background and efficient numerical schemes to design fast localized convolutional filters on graphs. Importantly, the proposed technique offers the same linear computational complexity and constant learning complexity as classical CNNs, while being universal to any graph structure. Experiments on MNIST and 20NEWS demonstrate the ability of this novel deep learning system to learn local, stationary, and compositional features on graphs.",
"We address the problem of recognizing situations in images. Given an image, the task is to predict the most salient verb (action), and fill its semantic roles such as who is performing the action, what is the source and target of the action, etc. Different verbs have different roles (e.g. attacking has weapon), and each role can take on many possible values (nouns). We propose a model based on Graph Neural Networks that allows us to efficiently capture joint dependencies between roles using neural networks defined on a graph. Experiments with different graph connectivities show that our approach that propagates information between roles significantly outperforms existing work, as well as multiple baselines. We obtain roughly 3-5 improvement over previous work in predicting the full situation. We also provide a thorough qualitative analysis of our model and influence of different roles in the verbs.",
"Convolutional Neural Networks are extremely efficient architectures in image and audio recognition tasks, thanks to their ability to exploit the local translational invariance of signal classes over their domain. In this paper we consider possible generalizations of CNNs to signals defined on more general domains without the action of a translation group. In particular, we propose two constructions, one based upon a hierarchical clustering of the domain, and another based on the spectrum of the graph Laplacian. We show through experiments that for low-dimensional graphs it is possible to learn convolutional layers with a number of parameters independent of the input size, resulting in efficient deep architectures.",
"Many underlying relationships among data in several areas of science and engineering, e.g., computer vision, molecular chemistry, molecular biology, pattern recognition, and data mining, can be represented in terms of graphs. In this paper, we propose a new neural network model, called graph neural network (GNN) model, that extends existing neural network methods for processing the data represented in graph domains. This GNN model, which can directly process most of the practically useful types of graphs, e.g., acyclic, cyclic, directed, and undirected, implements a function tau(G,n) isin IRm that maps a graph G and one of its nodes n into an m-dimensional Euclidean space. A supervised learning algorithm is derived to estimate the parameters of the proposed GNN model. The computational cost of the proposed algorithm is also considered. Some experimental results are shown to validate the proposed learning algorithm, and to demonstrate its generalization capabilities.",
"In this paper, we propose a novel neural network model called RNN Encoder-Decoder that consists of two recurrent neural networks (RNN). One RNN encodes a sequence of symbols into a fixed-length vector representation, and the other decodes the representation into another sequence of symbols. The encoder and decoder of the proposed model are jointly trained to maximize the conditional probability of a target sequence given a source sequence. The performance of a statistical machine translation system is empirically found to improve by using the conditional probabilities of phrase pairs computed by the RNN Encoder-Decoder as an additional feature in the existing log-linear model. Qualitatively, we show that the proposed model learns a semantically and syntactically meaningful representation of linguistic phrases.",
"One characteristic that sets humans apart from modern learning-based computer vision algorithms is the ability to acquire knowledge about the world and use that knowledge to reason about the visual world. Humans can learn about the characteristics of objects and the relationships that occur between them to learn a large variety of visual concepts, often with few examples. This paper investigates the use of structured prior knowledge in the form of knowledge graphs and shows that using this knowledge improves performance on image classification. We build on recent work on end-to-end learning on graphs, introducing the Graph Search Neural Network as a way of efficiently incorporating large knowledge graphs into a vision classification pipeline. We show in a number of experiments that our method outperforms standard neural network baselines for multi-label classification."
]
} |
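A minimal numpy sketch of one GGNN propagation step as described in the record above: neighbour states are aggregated through an adjacency matrix and each node's state is updated with a GRU-style gate. The weight shapes, the identity adjacency, and the number of unrolled steps are illustrative placeholders, not the published model.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ggnn_step(h, A, Wz, Wr, Wh):
    """One GGNN propagation step: aggregate neighbour states through
    the adjacency A, then apply a GRU-style gated update per node."""
    m = A @ h                                   # incoming messages
    z = sigmoid(m @ Wz[0] + h @ Wz[1])          # update gate
    r = sigmoid(m @ Wr[0] + h @ Wr[1])          # reset gate
    h_tilde = np.tanh(m @ Wh[0] + (r * h) @ Wh[1])
    return (1 - z) * h + z * h_tilde

n, d = 16, 8                       # e.g. 16 joints, 8-dim hidden state
rng = np.random.default_rng(0)
h = rng.normal(size=(n, d))        # initial node states
A = np.eye(n)                      # placeholder skeleton adjacency
Wz, Wr, Wh = (rng.normal(scale=0.1, size=(2, d, d)) for _ in range(3))
for _ in range(3):                 # unroll a fixed number of steps
    h = ggnn_step(h, A, Wz, Wr, Wh)
```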
1901.02035 | 2908990317 | We propose a generalized decision-theoretic system for a heterogeneous team of autonomous agents who are tasked with online identification of phenotypically expressed stress in crop fields. This system employs four distinct types of agents, specific to four available sensor modalities: satellites (Layer 3), uninhabited aerial vehicles (L2), uninhabited ground vehicles (L1), and static ground-level sensors (L0). Layers 3, 2, and 1 are tasked with performing image processing at the available resolution of the sensor modality and, along with data generated by layer 0 sensors, identify erroneous differences that arise over time. Our goal is to limit the use of the more computationally and temporally expensive subsequent layers. Therefore, from layer 3 to 1, each layer only investigates areas that previous layers have identified as potentially afflicted by stress. We introduce a reinforcement learning technique based on Perkins' Monte Carlo Exploring Starts for a generalized Markovian model for each layer's decision problem, and label the system the Agricultural Distributed Decision Framework (ADDF). As our domain is real-world and online, we illustrate implementations of the two major components of our system: a clustering-based image processing methodology and a two-layer POMDP implementation. | The most recent advance in precision agriculture is the FarmBeats initiative driven by Microsoft AI @cite_15 , in which a variety of network-accessible sensor modalities, including soil water potential sensors and AUAVs, are arranged to provide automated and targeted water intervention. This methodology is powerful for tackling stresses due to underwatering, but is incapable of detecting the presence of pathogens, pests and nutrient deficiency, which express themselves phenotypically. | {
"cite_N": [
"@cite_15"
],
"mid": [
"2605165679"
],
"abstract": [
"Data-driven techniques help boost agricultural productivity by increasing yields, reducing losses and cutting down input costs. However, these techniques have seen sparse adoption owing to high costs of manual data collection and limited connectivity solutions. In this paper, we present FarmBeats, an end-to-end IoT platform for agriculture that enables seamless data collection from various sensors, cameras and drones. FarmBeats's system design that explicitly accounts for weather-related power and Internet outages has enabled six month long deployments in two US farms."
]
} |
1901.02035 | 2908990317 | We propose a generalized decision-theoretic system for a heterogeneous team of autonomous agents who are tasked with online identification of phenotypically expressed stress in crop fields. This system employs four distinct types of agents, specific to four available sensor modalities: satellites (Layer 3), uninhabited aerial vehicles (L2), uninhabited ground vehicles (L1), and static ground-level sensors (L0). Layers 3, 2, and 1 are tasked with performing image processing at the available resolution of the sensor modality and, along with data generated by layer 0 sensors, identify erroneous differences that arise over time. Our goal is to limit the use of the more computationally and temporally expensive subsequent layers. Therefore, from layer 3 to 1, each layer only investigates areas that previous layers have identified as potentially afflicted by stress. We introduce a reinforcement learning technique based on Perkins' Monte Carlo Exploring Starts for a generalized Markovian model for each layer's decision problem, and label the system the Agricultural Distributed Decision Framework (ADDF). As our domain is real-world and online, we illustrate implementations of the two major components of our system: a clustering-based image processing methodology and a two-layer POMDP implementation. | Concerning the goal of disease identification and intervention, contemporary efforts largely leverage phenotypic expressions of stress via thermal detection @cite_12 and are often specific to the expression of a particular disease @cite_4 . What remains is a generalized model that encompasses the variety of stresses in a model-free way. That is, instead of seeking a particular expression, such a model would learn the correlation between erroneous growth patterns, leaf and fruit necrosis or chlorosis, leaf spots, leaf striations and wilting (as caused by stress) and the available image and environmental data. | {
"cite_N": [
"@cite_4",
"@cite_12"
],
"mid": [
"2103959917",
"2615516218"
],
"abstract": [
"Early and accurate detection and diagnosis of plant diseases are key factors in plant production and the reduction of both qualitative and quantitative losses in crop yield. Optical techniques, such as RGB imaging, multi- and hyperspectral sensors, thermography, or chlorophyll fluorescence, have proven their potential in automated, objective, and reproducible detection systems for the identification and quantification of plant diseases at early time points in epidemics. Recently, 3D scanning has also been added as an optical analysis that supplies additional information on crop plant vitality. Different platforms from proximal to remote sensing are available for multiscale monitoring of single crop organs or entire fields. Accurate and reliable detection of diseases is facilitated by highly sophisticated and innovative methods of data analysis that lead to new insights derived from sensor data for complex plant-pathogen systems. Nondestructive, sensor-based methods support and expand upon visual and or mo...",
"Abstract Precision agriculture (PA) utilizes tools and technologies to identify in-field soil and crop variability for improving farming practices and optimizing agronomic inputs. Traditionally, optical remote sensing (RS) that utilizes visible light and infrared regions of the electromagnetic spectrum has been used as an integral part of PA for crop and soil monitoring. Optical RS, however, is slow in differentiating stress levels in crops until visual symptoms become noticeable. Surface temperature is considered to be a rapid response variable that can indicate crop stresses prior to their visual symptoms. By measuring estimates of surface temperature, thermal RS has been found to be a promising tool for PA. Compared to optical RS, applications of thermal RS for PA have been limited. Until recently (i.e., before the advancement of low cost RS platforms such as unmanned aerial systems (UAVs)), the availability of high resolution thermal images was limited due to high acquisition costs. Given recent developments in UAVs, thermal images with high spatial and temporal resolutions have become available at a low cost, which has increased opportunities to understand in-field variability of crop and soil conditions useful for various agronomic decision-making. Before thermal RS is adopted as a routine tool for crop and environmental monitoring, there is a need to understand its current and potential applications as well as issues and concerns. This review focuses on current and potential applications of thermal RS in PA as well as some concerns relating to its application. The application areas of thermal RS in agriculture discussed here include irrigation scheduling, drought monitoring, crop disease detection, and mapping of soil properties, residues and tillage, field tiles, and crop maturity and yield. Some of the issues related to its application include spatial and temporal resolution, atmospheric conditions, and crop growth stages."
]
} |
1901.01977 | 2953386523 | We propose a hybrid approach aimed at improving the sample efficiency in goal-directed reinforcement learning. We do this via a two-step mechanism where firstly, we approximate a model from Model-Free reinforcement learning. Then, we leverage this approximate model along with a notion of reachability using Mean First Passage Times to perform Model-Based reinforcement learning. Built on such a novel observation, we design two new algorithms - Mean First Passage Time based Q-Learning (MFPT-Q) and Mean First Passage Time based DYNA (MFPT-DYNA), that have been fundamentally modified from the state-of-the-art reinforcement learning techniques. Preliminary results have shown that our hybrid approaches converge with much fewer iterations than their corresponding state-of-the-art counterparts and therefore require much fewer samples and much fewer training trials to converge. | Earlier works have demonstrated various approaches to solve the goal-directed reinforcement learning problem. One such work presented a solution to GDRLP for an unknown indoor environment: first, using the temporal difference learning method, an initial solution to reach the goal is found, and this solution is then improved by employing a variable learning rate @cite_16 . Several earlier works also focused on goal-directed exploration, as it provides insight into the corresponding RL problems. It has been proved that goal-directed exploration with RL methods can be intractable, therefore demonstrating that solving RL problems can be intractable as well @cite_19 . This work also showed that the behavior of an agent follows a random walk until it reaches the goal for the first time. The effect of representation and knowledge on the tractability of goal-directed exploration with RL algorithms has also been studied @cite_15 . However, in our work, the primary focus is on the second stage of GDRLP, where the aim is to accelerate the convergence to the optimal policy via model characterization. | {
"cite_N": [
"@cite_19",
"@cite_15",
"@cite_16"
],
"mid": [
"2052117683",
"1982948368",
"1858246706"
],
"abstract": [
"This article considers adaptive control architectures that integrate active sensory-motor systems with decision systems based on reinforcement learning. One unavoidable consequence of active perception is that the agent's internal representation often confounds external world states. We call this phoenomenon perceptual aliasing and show that it destabilizes existing reinforcement learning algorithms with respect to the optimal decision policy. We then describe a new decision system that overcomes these difficulties for a restricted class of decision problems. The system incorporates a perceptual subcycle within the overall decision cycle and uses a modified learning algorithm to suppress the effects of perceptual aliasing. The result is a control architecture that learns not only how to solve a task but also where to focus its visual attention in order to collect necessary sensory information.",
"We analyze the complexity of on-line reinforcement-learning algorithms applied to goal-directed exploration tasks. Previous work had concluded that, even in deterministic state spaces, initially uninformed reinforcement learning was at least exponential for such problems, or that it was of polynomial worst-case time-complexity only if the learning methods were augmented. We prove that, to the contrary, the algorithms are tractable with only a simple change in the reward structure (“penalizing the agent for action executions”) or in the initialization of the values that they maintain. In particular, we provide tight complexity bounds for both Watkins’ Q-learning and Heger’s Q-hat-learning and show how their complexity depends on properties of the state spaces. We also demonstrate how one can decrease the complexity even further by either learning action models or utilizing prior knowledge of the topology of the state spaces. Our results provide guidance for empirical reinforcement-learning researchers on how to distinguish hard reinforcement-learning problems from easy ones and how to represent them in a way that allows them to be solved efficiently.",
"This paper proposes and implements a reinforcement learning algorithm for an agent that can learn to navigate in an indoor and initially unknown environment. The agent learns a trajectory between an initial and a goal state through interactions with the environment. The environmental knowledge is encoded in two surfaces: the reward and the penalty surfaces. The former deals primarily with planning to reach the goal whilst the latter deals mainly with reaction to avoid obstacles. Temporal difference learning is the chosen strategy to construct both surfaces. The proposed algorithm is tested for different environments and types of obstacles. The simulation results suggest that the agent is able to reach a target from any point within the environment, avoiding local minimum points. Furthermore, the agent can improve an initial solution, employing a variable learning rate, through multiple visits to the spatial positions."
]
} |
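To make the "initial solution, then improvement with a variable learning rate" recipe in the record above concrete, here is a tabular sketch. The environment callback, the visit-count learning-rate schedule, and all constants are assumptions for illustration, not the cited method.

```python
import numpy as np

def goal_directed_q(env_step, n_states, n_actions, goal,
                    episodes=500, gamma=0.95, eps=0.1, max_steps=200):
    """Tabular Q-learning for a goal-directed task. The learning rate
    decays per state-action pair as alpha = 1 / (1 + visits), a simple
    stand-in for a variable learning rate."""
    Q = np.zeros((n_states, n_actions))
    visits = np.zeros((n_states, n_actions))
    rng = np.random.default_rng(0)
    for _ in range(episodes):
        s = int(rng.integers(n_states))      # exploring start
        for _ in range(max_steps):           # cap episode length
            if s == goal:
                break
            a = int(rng.integers(n_actions)) if rng.random() < eps \
                else int(Q[s].argmax())
            s2, r = env_step(s, a)           # user-supplied dynamics
            visits[s, a] += 1
            alpha = 1.0 / (1.0 + visits[s, a])
            Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
            s = s2
    return Q

# Tiny 1-D chain: action 0 moves left, action 1 moves right; unit
# reward on reaching the rightmost state, which is the goal.
chain = lambda s, a: (min(max(s + (1 if a == 1 else -1), 0), 9),
                      1.0 if s + (1 if a == 1 else -1) == 9 else 0.0)
Q = goal_directed_q(chain, n_states=10, n_actions=2, goal=9)
```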
1901.01977 | 2953386523 | We propose a hybrid approach aimed at improving the sample efficiency in goal-directed reinforcement learning. We do this via a two-step mechanism where firstly, we approximate a model from Model-Free reinforcement learning. Then, we leverage this approximate model along with a notion of reachability using Mean First Passage Times to perform Model-Based reinforcement learning. Built on such a novel observation, we design two new algorithms - Mean First Passage Time based Q-Learning (MFPT-Q) and Mean First Passage Time based DYNA (MFPT-DYNA), that have been fundamentally modified from the state-of-the-art reinforcement learning techniques. Preliminary results have shown that our hybrid approaches converge with much fewer iterations than their corresponding state-of-the-art counterparts and therefore require much fewer samples and much fewer training trials to converge. | Another interesting research direction focused on reducing the size of the problem space in MB approaches. The authors of @cite_14 proposed structured reachability analysis of MDPs in order to remove variables from the problem description, thereby reducing the size of the MDP and eventually making it easier to solve. It is therefore very intuitive to investigate approaches that combine the advantages of MF and MB methods @cite_9 @cite_10 . There have also been multiple previous works that combined the two paradigms. The primary objective of such methods was to speed up the learning process for MF reinforcement learning approaches. A broad area of research, including the DYNA framework @cite_12 @cite_27 , leveraged a learned model to generate synthetic experience for MF learning. | {
"cite_N": [
"@cite_14",
"@cite_9",
"@cite_27",
"@cite_10",
"@cite_12"
],
"mid": [
"1545472701",
"2604726708",
"1595634327",
"2756350131",
"1491843047"
],
"abstract": [
"Recent research in decision theoretic planning has focussed on making the solution of Markov decision processes (MDPs) more feasible. We develop a family of algorithms for structured reachability analysis of MDPs that are suitable when an initial state (or set of states) is known. Usin compact, structured representations of MDPs (e.g., Bayesian networks), our methods, which vary in the tradeoff between complexity and accurac roduce structured descriptions of (estimated) reacpagle states that can be used to eliminate variables oy variable values from the problem description, reducing the size of the MDP and making it easier to solve. One contribution of our work is the extension of ideas from GRAPHPLAN to deal with the distributed nature of action reoresentations typically embodied within Bayes nets and the problem of correlated action effects. We also demonstrate that our algorithm can be made more complete by using k-ary constraints instead of binary constraints. Another contribution is the illustration of how the compact representation of reachability constraints can be exploited by several existing (exact and approximate) abstraction algorithms for MDPs.",
"Reinforcement learning algorithms for real-world robotic applications must be able to handle complex, unknown dynamical systems while maintaining data-efficient learning. These requirements are handled well by model-free and model-based RL approaches, respectively. In this work, we aim to combine the advantages of these approaches. By focusing on time-varying linear-Gaussian policies, we enable a model-based algorithm based on the linear-quadratic regulator that can be integrated into the model-free framework of path integral policy improvement. We can further combine our method with guided policy search to train arbitrary parameterized policies such as deep neural networks. Our simulation and real-world experiments demonstrate that this method can solve challenging manipulation tasks with comparable or better performance than model-free methods while maintaining the sample efficiency of model-based methods.",
"Abstract This paper presents the basic results and ideas of dynamic programming as they relate most directly to the concerns of planning in AI. These form the theoretical basis for the incremental planning methods used in the integrated architecture Dyna. These incremental planning methods are based on continually updating an evaluation function and the situation-action mapping of a reactive system. Actions are generated by the reactive system and thus involve minimal delay, while the incremental planning process guarantees that the actions and evaluation function will eventually be optimal—no matter how extensive a search is required. These methods are well suited to stochastic tasks and to tasks in which a complete and accurate model is not available. For tasks too large to implement the situation-action mapping as a table, supervised-learning methods must be used, and their capabilities remain a significant limitation of the approach.",
"Reinforcement Learning is divided in two main paradigms: model-free and model-based. Each of these two paradigms has strengths and limitations, and has been successfully applied to real world domains that are appropriate to its corresponding strengths. In this paper, we present a new approach aimed at bridging the gap between these two paradigms. We aim to take the best of the two paradigms and combine them in an approach that is at the same time data-efficient and cost-savvy. We do so by learning a probabilistic dynamics model and leveraging it as a prior for the intertwined model-free optimization. As a result, our approach can exploit the generality and structure of the dynamics model, but is also capable of ignoring its inevitable inaccuracies, by directly incorporating the evidence provided by the direct observation of the cost. Preliminary results demonstrate that our approach outperforms purely model-based and model-free approaches, as well as the approach of simply switching from a model-based to a model-free setting.",
"This paper extends previous work with Dyna, a class of architectures for intelligent systems based on approximating dynamic programming methods. Dyna architectures integrate trial-and-error (reinforcement) learning and execution-time planning into a single process operating alternately on the world and on a learned model of the world. In this paper, I present and show results for two Dyna architectures. The Dyna-PI architecture is based on dynamic programming's policy iteration method and can be related to existing AI ideas such as evaluation functions and universal plans (reactive systems). Using a navigation task, results are shown for a simple Dyna-PI system that simultaneously learns by trial and error, learns a world model, and plans optimal routes using the evolving world model. The Dyna-Q architecture is based on Watkins's Q-learning, a new kind of reinforcement learning. Dyna-Q uses a less familiar set of data structures than does Dyna-PI, but is arguably simpler to implement and use. We show that Dyna-Q architectures are easy to adapt for use in changing environments."
]
} |
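As a concrete illustration of the Dyna idea cited above -- a learned model generating synthetic experience for model-free updates -- here is a minimal tabular Dyna-Q sketch. The step_fn interface, the deterministic-model assumption, and all hyperparameters are illustrative choices, not taken from the cited papers.

import random
from collections import defaultdict

# Minimal tabular Dyna-Q: each real step updates Q and a deterministic model,
# and the model then replays stored transitions ("planning") to speed up
# model-free learning.
def dyna_q(step_fn, start, goal, actions, episodes=50, planning_steps=20,
           alpha=0.1, gamma=0.95, eps=0.1):
    Q = defaultdict(float)          # Q[(state, action)] -> value
    model = {}                      # model[(state, action)] -> (reward, next_state)
    for _ in range(episodes):
        s = start
        while s != goal:
            a = (random.choice(actions) if random.random() < eps
                 else max(actions, key=lambda b: Q[(s, b)]))
            r, s2 = step_fn(s, a)   # one real interaction with the environment
            best = max(Q[(s2, b)] for b in actions)
            Q[(s, a)] += alpha * (r + gamma * best - Q[(s, a)])
            model[(s, a)] = (r, s2)                 # record the observed transition
            for _ in range(planning_steps):         # planning on synthetic experience
                (ps, pa), (pr, ps2) = random.choice(list(model.items()))
                pbest = max(Q[(ps2, b)] for b in actions)
                Q[(ps, pa)] += alpha * (pr + gamma * pbest - Q[(ps, pa)])
            s = s2
    return Q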
1901.01977 | 2953386523 | We propose a hybrid approach aimed at improving the sample efficiency in goal-directed reinforcement learning. We do this via a two-step mechanism where, firstly, we approximate a model from Model-Free reinforcement learning. Then, we leverage this approximate model along with a notion of reachability using Mean First Passage Times to perform Model-Based reinforcement learning. Built on this novel observation, we design two new algorithms - Mean First Passage Time based Q-Learning (MFPT-Q) and Mean First Passage Time based DYNA (MFPT-DYNA) - that fundamentally modify the state-of-the-art reinforcement learning techniques. Preliminary results show that our hybrid approaches converge in far fewer iterations than their state-of-the-art counterparts, and therefore require far fewer samples and training trials to converge. | Along a similar direction, several prior works focused on devising a model as an initialization for the MF component @cite_18 @cite_23 . One challenge this leads to is inaccuracy in the learned model, which causes the issue of model bias. A suggested solution to overcome model bias is to directly train the dynamics in a goal-oriented manner @cite_5 @cite_6 . Our work is also motivated by this approach in order to deal with model bias (a Mean First Passage Time computation sketch follows this record). | {
"cite_N": [
"@cite_5",
"@cite_18",
"@cite_6",
"@cite_23"
],
"mid": [
"2949444583",
"2072752020",
"2599719672",
"2743381431"
],
"abstract": [
"With the increasing popularity of machine learning techniques, it has become common to see prediction algorithms operating within some larger process. However, the criteria by which we train these algorithms often differ from the ultimate criteria on which we evaluate them. This paper proposes an end-to-end approach for learning probabilistic machine learning models in a manner that directly captures the ultimate task-based objective for which they will be used, within the context of stochastic programming. We present three experimental evaluations of the proposed approach: a classical inventory stock problem, a real-world electrical grid scheduling task, and a real-world energy storage arbitrage task. We show that the proposed approach can outperform both traditional modeling and purely black-box policy optimization approaches in these applications.",
"Learning motion control as a unified process of designing the reference trajectory and the controller is one of the most challenging problems in robotics. The complexity of the problem prevents most of the existing optimization algorithms from giving satisfactory results. While model-based algorithms like iterative linear-quadratic-Gaussian (iLQG) can be used to design a suitable controller for the motion control, their performance is strongly limited by the model accuracy. An inaccurate model may lead to degraded performance of the controller on the physical system. Although using machine learning approaches to learn the motion control on real systems have been proven to be effective, their performance depends on good initialization. To address these issues, this paper introduces a two-step algorithm which combines the proven performance of a model-based controller with a model-free method for compensating for model inaccuracy. The first step optimizes the problem using iLQG. Then, in the second step this controller is used to initialize the policy for our PI2-01 reinforcement learning algorithm. This algorithm is a derivation of the PI2 algorithm enabling more stable and faster convergence. The performance of this method is demonstrated both in simulation and experimental results.",
"Real-world robots are becoming increasingly complex and commonly act in poorly understood environments where it is extremely challenging to model or learn their true dynamics. Therefore, it might be desirable to take a task-specific approach, wherein the focus is on explicitly learning the dynamics model which achieves the best control performance for the task at hand, rather than learning the true dynamics. In this work, we use Bayesian optimization in an active learning framework where a locally linear dynamics model is learned with the intent of maximizing the control performance, and used in conjunction with optimal control schemes to efficiently design a controller for a given task. This model is updated directly based on the performance observed in experiments on the physical system in an iterative manner until a desired performance is achieved. We demonstrate the efficacy of the proposed approach through simulations and real experiments on a quadrotor testbed.",
"Model-free deep reinforcement learning algorithms have been shown to be capable of learning a wide range of robotic skills, but typically require a very large number of samples to achieve good performance. Model-based algorithms, in principle, can provide for much more efficient learning, but have proven difficult to extend to expressive, high-capacity models such as deep neural networks. In this work, we demonstrate that medium-sized neural network models can in fact be combined with model predictive control (MPC) to achieve excellent sample complexity in a model-based reinforcement learning algorithm, producing stable and plausible gaits to accomplish various complex locomotion tasks. We also propose using deep neural network dynamics models to initialize a model-free learner, in order to combine the sample efficiency of model-based approaches with the high task-specific performance of model-free methods. We empirically demonstrate on MuJoCo locomotion tasks that our pure model-based approach trained on just random action data can follow arbitrary trajectories with excellent sample efficiency, and that our hybrid algorithm can accelerate model-free learning on high-speed benchmark tasks, achieving sample efficiency gains of 3-5x on swimmer, cheetah, hopper, and ant agents. Videos can be found at this https URL"
]
} |
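The record above turns on Mean First Passage Times. For a Markov chain with a known transition matrix, MFPTs to a goal state follow from a standard linear solve, sketched below on a made-up 3-state chain; the papers' own reachability-based prioritization is not reproduced here.

import numpy as np

# Mean first passage time to a goal state g: treat g as absorbing, drop its
# row and column to get the substochastic matrix T over transient states,
# then solve (I - T) t = 1 for the expected hitting times t.
P = np.array([[0.5, 0.4, 0.1],
              [0.2, 0.5, 0.3],
              [0.0, 0.0, 1.0]])   # toy chain; state 2 is the (absorbing) goal
g = 2
transient = [i for i in range(len(P)) if i != g]
T = P[np.ix_(transient, transient)]
t = np.linalg.solve(np.eye(len(T)) - T, np.ones(len(T)))
print(dict(zip(transient, t)))   # expected steps to reach the goal from each state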
1901.01769 | 2908459497 | The first six months of 2018 have seen cryptocurrency thefts of $761 million, and the technology is also the latest and greatest tool for money laundering. This increase in crime has caused both researchers and law enforcement to look for ways to trace criminal proceeds. Although tracing algorithms have improved recently, they still yield an enormous amount of data of which very few datapoints are relevant or interesting to investigators, let alone ordinary bitcoin owners interested in provenance. In this work we describe efforts to visualize relevant data on a blockchain. To accomplish this, we come up with a graphical model to represent the stolen coins and then implement this using a variety of visualization techniques. | A more mature system was BitConeView, presented by Battista and Donato in 2015 @cite_4 . This was among the first to provide a sensible GUI for inspecting how a particular UTXO propagated through the network. In order to explain what it means for money to move, the authors came up with 'purity' -- essentially a version of haircut tainting. They evaluated the usability of their system only informally, and concluded that the way purity was presented to the user needed further improvement (a haircut-tainting sketch follows this record). | {
"cite_N": [
"@cite_4"
],
"mid": [
"1930907635"
],
"abstract": [
"Bitcoin is a digital currency whose transactions are stored into a public ledger, called blockchain, that can be viewed as a directed graph with more than 70 million nodes, where each node represents a transaction and each edge represents Bitcoins flowing from one transaction to another one. We describe a system for the visual analysis of how and when a flow of Bitcoins mixes with other flows in the transaction graph. Such a system relies on high-level metaphors for the representation of the graph and the size and characteristics of transactions, allowing for high level analysis of big portions of it."
]
} |
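As a hedged illustration of the haircut tainting that 'purity' is said to resemble, the sketch below propagates a taint fraction through a single transaction. The data layout is an assumption, and BitConeView's actual purity computation may differ in detail.

# Haircut tainting: every output of a transaction inherits the tainted
# fraction of the transaction's total input value; "purity" would then be
# one minus this fraction.
def haircut_taint(inputs):
    """inputs: list of (value, taint_fraction) pairs feeding one transaction.
    Returns the taint fraction applied uniformly to every output."""
    total = sum(v for v, _ in inputs)
    tainted = sum(v * t for v, t in inputs)
    return tainted / total if total else 0.0

# A transaction spending 2 BTC of stolen coin and 3 BTC of clean coin:
frac = haircut_taint([(2.0, 1.0), (3.0, 0.0)])
print(frac)  # 0.4 -> every output is 40 percent tainted (purity 0.6)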
1901.01769 | 2908459497 | The first six months of 2018 have seen cryptocurrency thefts of $761 million, and the technology is also the latest and greatest tool for money laundering. This increase in crime has caused both researchers and law enforcement to look for ways to trace criminal proceeds. Although tracing algorithms have improved recently, they still yield an enormous amount of data of which very few datapoints are relevant or interesting to investigators, let alone ordinary bitcoin owners interested in provenance. In this work we describe efforts to visualize relevant data on a blockchain. To accomplish this, we come up with a graphical model to represent the stolen coins and then implement this using a variety of visualization techniques. |  devised a graph visualization of the blockchain that allowed them to detect laundering activity and several denial-of-service attacks @cite_6 . Unlike previous approaches, they made use of a top-down, system-wide visualization to understand transaction patterns. The follow-up paper proposed an extension to the global view, in which graph analysis is aided by human intuition @cite_11 (a force-directed layout sketch follows this record). | {
"cite_N": [
"@cite_6",
"@cite_11"
],
"mid": [
"2429791812",
"2568736592"
],
"abstract": [
"Abstract This work presents a systemic top-down visualization of Bitcoin transaction activity to explore dynamically generated patterns of algorithmic behavior. Bitcoin dominates the cryptocurrency markets and presents researchers with a rich source of real-time transactional data. The pseudonymous yet public nature of the data presents opportunities for the discovery of human and algorithmic behavioral patterns of interest to many parties such as financial regulators, protocol designers, and security analysts. However, retaining visual fidelity to the underlying data to retain a fuller understanding of activity within the network remains challenging, particularly in real time. We expose an effective force-directed graph visualization employed in our large-scale data observation facility to accelerate this data exploration and derive useful insight among domain experts and the general public alike. The high-fidelity visualizations demonstrated in this article allowed for collaborative discovery of unexpec...",
"Graphical abstractDisplay Omitted HighlightsBetter graph-sensemaking through fuzzy queries and large-scale visualisation.Fuzzy logic to identify interesting nodes.Illustrative examples of such data exploration provided. This work presents three case-studies of how fuzzy logic can be combined with large-scale immersive visualisation to enhance the process of graph sensemaking, enabling interactive fuzzy filtering of large global views of graphs. The aim is to provide users a mechanism to quickly identify interesting nodes for further analysis. Fuzzy logic allows a flexible framework to ask human-like curiosity-driven questions over the data, and visualisation allows its communication and understanding. Together, these two technologies successfully empower novices and experts to a faster and deeper understanding of the underlying patterns in big datasets compared to traditional means in a desktop screen with crisp queries. Among other examples, we provide evidence of how these two technologies successfully enable the identification of relevant transaction patterns in the Bitcoin network."
]
} |
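The force-directed layout underlying the systems above is available off the shelf; the sketch below lays out a fabricated five-transaction graph with networkx's spring layout. It is only a toy stand-in for the authors' large-scale visualization facility.

import networkx as nx
import matplotlib.pyplot as plt

# Toy force-directed (spring) layout of a small, fabricated transaction
# graph; edge widths encode transferred value.
G = nx.DiGraph()
G.add_weighted_edges_from([
    ("tx1", "tx2", 1.5), ("tx1", "tx3", 0.5),
    ("tx2", "tx4", 1.0), ("tx3", "tx4", 0.4), ("tx4", "tx5", 1.3),
])
pos = nx.spring_layout(G, seed=42)   # force-directed node positions
nx.draw(G, pos, with_labels=True, node_size=700,
        width=[G[u][v]["weight"] for u, v in G.edges()])
plt.show()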
1901.01769 | 2908459497 | The first six months of 2018 have seen cryptocurrency thefts of $761 million, and the technology is also the latest and greatest tool for money laundering. This increase in crime has caused both researchers and law enforcement to look for ways to trace criminal proceeds. Although tracing algorithms have improved recently, they still yield an enormous amount of data of which very few datapoints are relevant or interesting to investigators, let alone ordinary bitcoin owners interested in provenance. In this work we describe efforts to visualize relevant data on a blockchain. To accomplish this, we come up with a graphical model to represent the stolen coins and then implement this using a variety of visualization techniques. | Unlike BitConduit and similar systems, we are not doing any actor characterization in our visualisation tool @cite_9 . The generation of graph colours is exogenous, relying on external theft reports or on software that analyses patterns of mixes, ransomware, and other undesirable activity. | {
"cite_N": [
"@cite_9"
],
"mid": [
"2732369360"
],
"abstract": [
"BitConduite is a system we are developing for the visual exploration of financial activity on the Bitcoin network. Bitcoin is the largest digital pseudo-currency worldwide and its study is of increasing interest and importance to economists, bankers, policymakers, and law enforcement authorities. All financial transactions in Bitcoin are available in an openly accessible online ledger—the (Bitcoin) blockchain. Yet, the open data does not lend itself easily to an analysis of how different individuals and institutions—or entities on the network—actually use Bitcoin. Our system BitConduite offers a data transformation back end that gives us an entity-based access to the blockchain data and a visualization front end that supports a novel high-level view on transactions over time. In particular, it facilitates the exploration of activity through filtering and clustering interactions. We are developing our system with experts in economics and will conduct a formal user study to assess our approach of Bitcoin activity analysis."
]
} |
1901.01939 | 2907034307 | The main goal of network pruning is imposing sparsity on the neural network by increasing the number of parameters with zero value in order to reduce the architecture size and increase the computational speedup. In most of the previous research works, sparsity is imposed stochastically without considering any prior knowledge of the weights distribution or other internal network characteristics. Enforcing too much sparsity may induce an accuracy drop because many important elements may be eliminated. In this paper, we propose Guided Attention for Sparsity Learning (GASL) to achieve (1) model compression, by having fewer elements, and speed-up; (2) prevention of the accuracy drop, by supervising the sparsity operation via a guided attention mechanism; and (3) a generic mechanism that can be adapted for any type of architecture. Our work is aimed at providing a framework based on interpretable attention mechanisms for imposing structured and non-structured sparsity in deep neural networks. For Cifar-100 experiments, we achieved the state-of-the-art sparsity level and 2.91x speedup with competitive accuracy compared to the best method. For the MNIST dataset and the LeNet architecture we also achieved the highest sparsity and speedup levels. | Network compression for parameter reduction has long been of great interest, and a large number of research efforts have been dedicated to it. In @cite_10 @cite_50 @cite_43 @cite_16 , network pruning achieves a significant reduction in parameters, although these methods suffer from computational inefficiency because they prune individual weights rather than structures (a magnitude-pruning sketch follows this record). | {
"cite_N": [
"@cite_43",
"@cite_16",
"@cite_10",
"@cite_50"
],
"mid": [
"2952746978",
"2952088488",
"2963674932",
"2119144962"
],
"abstract": [
"The success of deep learning in numerous application domains created the de- sire to run and train them on mobile devices. This however, conflicts with their computationally, memory and energy intense nature, leading to a growing interest in compression. Recent work by (2015a) propose a pipeline that involves retraining, pruning and quantization of neural network weights, obtaining state-of-the-art compression rates. In this paper, we show that competitive compression rates can be achieved by using a version of soft weight-sharing (Nowlan & Hinton, 1992). Our method achieves both quantization and pruning in one simple (re-)training procedure. This point of view also exposes the relation between compression and the minimum description length (MDL) principle.",
"We explore a recently proposed Variational Dropout technique that provided an elegant Bayesian interpretation to Gaussian Dropout. We extend Variational Dropout to the case when dropout rates are unbounded, propose a way to reduce the variance of the gradient estimator and report first experimental results with individual dropout rates per weight. Interestingly, it leads to extremely sparse solutions both in fully-connected and convolutional layers. This effect is similar to automatic relevance determination effect in empirical Bayes but has a number of advantages. We reduce the number of parameters up to 280 times on LeNet architectures and up to 68 times on VGG-like networks with a negligible decrease of accuracy.",
"Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems. Also, conventional networks fix the architecture before training starts; as a result, training cannot improve the architecture. To address these limitations, we describe a method to reduce the storage and computation required by neural networks by an order of magnitude without affecting their accuracy by learning only the important connections. Our method prunes redundant connections using a three-step method. First, we train the network to learn which connections are important. Next, we prune the unimportant connections. Finally, we retrain the network to fine tune the weights of the remaining connections. On the ImageNet dataset, our method reduced the number of parameters of AlexNet by a factor of 9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar experiments with VGG-16 found that the total number of parameters can be reduced by 13x, from 138 million to 10.3 million, again with no loss of accuracy.",
"Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems with limited hardware resources. To address this limitation, we introduce \"deep compression\", a three stage pipeline: pruning, trained quantization and Huffman coding, that work together to reduce the storage requirement of neural networks by 35x to 49x without affecting their accuracy. Our method first prunes the network by learning only the important connections. Next, we quantize the weights to enforce weight sharing, finally, we apply Huffman coding. After the first two steps we retrain the network to fine tune the remaining connections and the quantized centroids. Pruning, reduces the number of connections by 9x to 13x; Quantization then reduces the number of bits that represent each connection from 32 to 5. On the ImageNet dataset, our method reduced the storage required by AlexNet by 35x, from 240MB to 6.9MB, without loss of accuracy. Our method reduced the size of VGG-16 by 49x from 552MB to 11.3MB, again with no loss of accuracy. This allows fitting the model into on-chip SRAM cache rather than off-chip DRAM memory. Our compression method also facilitates the use of complex neural networks in mobile applications where application size and download bandwidth are constrained. Benchmarked on CPU, GPU and mobile GPU, compressed network has 3x to 4x layerwise speedup and 3x to 7x better energy efficiency."
]
} |
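The weight-pruning methods cited above share one primitive -- zeroing low-magnitude weights under a mask -- which the PyTorch sketch below shows in isolation. The threshold rule and the 90 percent target are illustrative; none of the cited papers' retraining or quantization pipelines are reproduced.

import torch

# Magnitude-based weight pruning: zero out the p fraction of weights with the
# smallest absolute value and return a mask so they can stay zero on retraining.
def magnitude_prune(weight: torch.Tensor, p: float):
    k = int(p * weight.numel())
    if k == 0:
        return weight, torch.ones_like(weight, dtype=torch.bool)
    threshold = weight.abs().flatten().kthvalue(k).values  # k-th smallest magnitude
    mask = weight.abs() > threshold
    return weight * mask, mask

w = torch.randn(256, 256)
w_pruned, mask = magnitude_prune(w, p=0.9)
print(f"sparsity: {1 - mask.float().mean().item():.2%}")  # roughly 90 percent zeros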
1901.01939 | 2907034307 | The main goal of network pruning is imposing sparsity on the neural network by increasing the number of parameters with zero value in order to reduce the architecture size and increase the computational speedup. In most of the previous research works, sparsity is imposed stochastically without considering any prior knowledge of the weights distribution or other internal network characteristics. Enforcing too much sparsity may induce an accuracy drop because many important elements may be eliminated. In this paper, we propose Guided Attention for Sparsity Learning (GASL) to achieve (1) model compression, by having fewer elements, and speed-up; (2) prevention of the accuracy drop, by supervising the sparsity operation via a guided attention mechanism; and (3) a generic mechanism that can be adapted for any type of architecture. Our work is aimed at providing a framework based on interpretable attention mechanisms for imposing structured and non-structured sparsity in deep neural networks. For Cifar-100 experiments, we achieved the state-of-the-art sparsity level and 2.91x speedup with competitive accuracy compared to the best method. For the MNIST dataset and the LeNet architecture we also achieved the highest sparsity and speedup levels. | In @cite_35 @cite_12 @cite_26 , pruning the unimportant parts of the structure (this can be neurons in fully-connected layers or channels/filters in convolutional layers) rather than simple weight pruning has been proposed, and significant computational speedup has been achieved. However, these methods require the architecture to be fully trained first, so the potential training speedup from sparsity enforcement cannot be attained. A solution for training speedup has been proposed via the @math -regularization technique using online sparsification @cite_6 . Training speedup is of great importance, but adding a regularizer solely to speed up training (through concurrent network pruning) is inefficient, since imposing the @math -regularization itself adds computational cost. Instead, we will use adaptive gradient clipping @cite_28 for training speedup (a gradient-clipping sketch follows this record). | {
"cite_N": [
"@cite_35",
"@cite_26",
"@cite_28",
"@cite_6",
"@cite_12"
],
"mid": [
"2963243833",
"2963117513",
"1815076433",
"2771655537",
"2963000224"
],
"abstract": [
"Compression and computational efficiency in deep learning have become a problem of great significance. In this work, we argue that the most principled and effective way to attack this problem is by adopting a Bayesian point of view, where through sparsity inducing priors we prune large parts of the network. We introduce two novelties in this paper: 1) we use hierarchical priors to prune nodes instead of individual weights, and 2) we use the posterior uncertainties to determine the optimal fixed point precision to encode the weights. Both factors significantly contribute to achieving the state of the art in terms of compression rates, while still staying competitive with methods designed to optimize for speed or energy efficiency.",
"Dropout-based regularization methods can be regarded as injecting random noise with pre-defined magnitude to different parts of the neural network during training. It was recently shown that Bayesian dropout procedure not only improves gener- alization but also leads to extremely sparse neural architectures by automatically setting the individual noise magnitude per weight. However, this sparsity can hardly be used for acceleration since it is unstructured. In the paper, we propose a new Bayesian model that takes into account the computational structure of neural net- works and provides structured sparsity, e.g. removes neurons and or convolutional channels in CNNs. To do this we inject noise to the neurons outputs while keeping the weights unregularized. We establish the probabilistic model with a proper truncated log-uniform prior over the noise and truncated log-normal variational approximation that ensures that the KL-term in the evidence lower bound is com- puted in closed-form. The model leads to structured sparsity by removing elements with a low SNR from the computation graph and provides significant acceleration on a number of deep neural architectures. The model is easy to implement as it can be formulated as a separate dropout-like layer.",
"There are two widely known issues with properly training recurrent neural networks, the vanishing and the exploding gradient problems detailed in (1994). In this paper we attempt to improve the understanding of the underlying issues by exploring these problems from an analytical, a geometric and a dynamical systems perspective. Our analysis is used to justify a simple yet effective solution. We propose a gradient norm clipping strategy to deal with exploding gradients and a soft constraint for the vanishing gradients problem. We validate empirically our hypothesis and proposed solutions in the experimental section.",
"We propose a practical method for @math norm regularization for neural networks: pruning the network during training by encouraging weights to become exactly zero. Such regularization is interesting since (1) it can greatly speed up training and inference, and (2) it can improve generalization. AIC and BIC, well-known model selection criteria, are special cases of @math regularization. However, since the @math norm of weights is non-differentiable, we cannot incorporate it directly as a regularization term in the objective function. We propose a solution through the inclusion of a collection of non-negative stochastic gates, which collectively determine which weights to set to zero. We show that, somewhat surprisingly, for certain distributions over the gates, the expected @math norm of the resulting gated weights is differentiable with respect to the distribution parameters. We further propose the distribution for the gates, which is obtained by \"stretching\" a binary concrete distribution and then transforming its samples with a hard-sigmoid. The parameters of the distribution over the gates can then be jointly optimized with the original network parameters. As a result our method allows for straightforward and efficient learning of model structures with stochastic gradient descent and allows for conditional computation in a principled way. We perform various experiments to demonstrate the effectiveness of the resulting approach and regularizer.",
"High demand for computation resources severely hinders deployment of large-scale Deep Neural Networks (DNN) in resource constrained devices. In this work, we propose a Structured Sparsity Learning (SSL) method to regularize the structures (i.e., filters, channels, filter shapes, and layer depth) of DNNs. SSL can: (1) learn a compact structure from a bigger DNN to reduce computation cost; (2) obtain a hardware-friendly structured sparsity of DNN to efficiently accelerate the DNN's evaluation. Experimental results show that SSL achieves on average 5.1 × and 3.1 × speedups of convolutional layer computation of AlexNet against CPU and GPU, respectively, with off-the-shelf libraries. These speedups are about twice speedups of non-structured sparsity; (3) regularize the DNN structure to improve classification accuracy. The results show that for CIFAR-10, regularization on layer depth reduces a 20-layer Deep Residual Network (ResNet) to 18 layers while improves the accuracy from 91.25 to 92.60 , which is still higher than that of original ResNet with 32 layers. For AlexNet, SSL reduces the error by 1 ."
]
} |
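Since the paragraph above opts for adaptive gradient clipping, here is a minimal sketch of global gradient-norm clipping in PyTorch. The tiny model, the random data, and the fixed max_norm of 1.0 are placeholders; an adaptive variant would adjust max_norm over the course of training.

import torch

# One training step with gradient-norm clipping in the style of Pascanu et
# al.: rescale gradients whenever their global norm exceeds a threshold.
model = torch.nn.Linear(10, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(32, 10), torch.randn(32, 1)

loss = torch.nn.functional.mse_loss(model(x), y)
opt.zero_grad()
loss.backward()
total_norm = torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
opt.step()
print(float(total_norm))  # gradient norm measured before clipping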
1901.01939 | 2907034307 | The main goal of network pruning is imposing sparsity on the neural network by increasing the number of parameters with zero value in order to reduce the architecture size and increase the computational speedup. In most of the previous research works, sparsity is imposed stochastically without considering any prior knowledge of the weights distribution or other internal network characteristics. Enforcing too much sparsity may induce an accuracy drop because many important elements may be eliminated. In this paper, we propose Guided Attention for Sparsity Learning (GASL) to achieve (1) model compression, by having fewer elements, and speed-up; (2) prevention of the accuracy drop, by supervising the sparsity operation via a guided attention mechanism; and (3) a generic mechanism that can be adapted for any type of architecture. Our work is aimed at providing a framework based on interpretable attention mechanisms for imposing structured and non-structured sparsity in deep neural networks. For Cifar-100 experiments, we achieved the state-of-the-art sparsity level and 2.91x speedup with competitive accuracy compared to the best method. For the MNIST dataset and the LeNet architecture we also achieved the highest sparsity and speedup levels. | In this paper, the goal is to impose sparsity in an accurate and interpretable way using the attention mechanism. So far, attention-based deep architectures have been proposed for the image @cite_39 @cite_38 @cite_15 @cite_46 and speech domains @cite_41 @cite_25 @cite_49 , as well as for machine translation @cite_8 @cite_47 @cite_21 . Recently, the supervision of the attention mechanism has become a subject of interest as well @cite_51 @cite_2 @cite_20 @cite_27 , in which the attention is supervised using some external guidance. We propose the use of guided attention for enforcing sparsity, mapping the sparsity distribution of the targeted elements (the elements on which sparsity enforcement is desired: weights, channels, neurons, etc.) to the desired target distribution (a guided-sparsity penalty sketch follows this record). | {
"cite_N": [
"@cite_38",
"@cite_47",
"@cite_15",
"@cite_41",
"@cite_8",
"@cite_21",
"@cite_39",
"@cite_27",
"@cite_49",
"@cite_2",
"@cite_46",
"@cite_51",
"@cite_25",
"@cite_20"
],
"mid": [
"2220981600",
"2949335953",
"2147527908",
"2962826786",
"2133564696",
"2963403868",
"2737725206",
"2496749113",
"1994960488",
"2963630207",
"1514535095",
"2520028199",
"854541894",
"2469296930"
],
"abstract": [
"In this work we focus on the problem of image caption generation. We propose an extension of the long short term memory (LSTM) model, which we coin gLSTM for short. In particular, we add semantic information extracted from the image as extra input to each unit of the LSTM block, with the aim of guiding the model towards solutions that are more tightly coupled to the image content. Additionally, we explore different length normalization strategies for beam search to avoid bias towards short sentences. On various benchmark datasets such as Flickr8K, Flickr30K and MS COCO, we obtain results that are on par with or better than the current state-of-the-art.",
"An attentional mechanism has lately been used to improve neural machine translation (NMT) by selectively focusing on parts of the source sentence during translation. However, there has been little work exploring useful architectures for attention-based NMT. This paper examines two simple and effective classes of attentional mechanism: a global approach which always attends to all source words and a local one that only looks at a subset of source words at a time. We demonstrate the effectiveness of both approaches over the WMT translation tasks between English and German in both directions. With local attention, we achieve a significant gain of 5.0 BLEU points over non-attentional systems which already incorporate known techniques such as dropout. Our ensemble model using different attention architectures has established a new state-of-the-art result in the WMT'15 English to German translation task with 25.9 BLEU points, an improvement of 1.0 BLEU points over the existing best system backed by NMT and an n-gram reranker.",
"Applying convolutional neural networks to large images is computationally expensive because the amount of computation scales linearly with the number of image pixels. We present a novel recurrent neural network model that is capable of extracting information from an image or video by adaptively selecting a sequence of regions or locations and only processing the selected regions at high resolution. Like convolutional neural networks, the proposed model has a degree of translation invariance built-in, but the amount of computation it performs can be controlled independently of the input image size. While the model is non-differentiable, it can be trained using reinforcement learning methods to learn task-specific policies. We evaluate our model on several image classification tasks, where it significantly outperforms a convolutional neural network baseline on cluttered images, and on a dynamic visual control problem, where it learns to track a simple object without an explicit training signal for doing so.",
"Many state-of-the-art Large Vocabulary Continuous Speech Recognition (LVCSR) Systems are hybrids of neural networks and Hidden Markov Models (HMMs). Recently, more direct end-to-end methods have been investigated, in which neural architectures were trained to model sequences of characters [1,2]. To our knowledge, all these approaches relied on Connectionist Temporal Classification [3] modules. We investigate an alternative method for sequence modelling based on an attention mechanism that allows a Recurrent Neural Network (RNN) to learn alignments between sequences of input frames and output labels. We show how this setup can be applied to LVCSR by integrating the decoding RNN with an n-gram language model and by speeding up its operation by constraining selections made by the attention mechanism and by reducing the source sequence lengths by pooling information over time. Recognition accuracies similar to other HMM-free RNN-based approaches are reported for the Wall Street Journal corpus.",
"Neural machine translation is a recently proposed approach to machine translation. Unlike the traditional statistical machine translation, the neural machine translation aims at building a single neural network that can be jointly tuned to maximize the translation performance. The models proposed recently for neural machine translation often belong to a family of encoder-decoders and consists of an encoder that encodes a source sentence into a fixed-length vector from which a decoder generates a translation. In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition.",
"The dominant sequence transduction models are based on complex recurrent or convolutional neural networks that include an encoder and a decoder. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles, by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.0 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature.",
"Recognizing fine-grained categories (e.g., bird species) is difficult due to the challenges of discriminative region localization and fine-grained feature learning. Existing approaches predominantly solve these challenges independently, while neglecting the fact that region detection and fine-grained feature learning are mutually correlated and thus can reinforce each other. In this paper, we propose a novel recurrent attention convolutional neural network (RA-CNN) which recursively learns discriminative region attention and region-based feature representation at multiple scales in a mutual reinforced way. The learning at each scale consists of a classification sub-network and an attention proposal sub-network (APN). The APN starts from full images, and iteratively generates region attention from coarse to fine by taking previous prediction as a reference, while the finer scale network takes as input an amplified attended region from previous scale in a recurrent way. The proposed RA-CNN is optimized by an intra-scale classification loss and an inter-scale ranking loss, to mutually learn accurate region attention and fine-grained representation. RA-CNN does not need bounding box part annotations and can be trained end-to-end. We conduct comprehensive experiments and show that RA-CNN achieves the best performance in three fine-grained tasks, with relative accuracy gains of 3.3 , 3.7 , 3.8 , on CUB Birds, Stanford Dogs and Stanford Cars, respectively.",
"In this paper, we improve the attention or alignment accuracy of neural machine translation by utilizing the alignments of training sentence pairs. We simply compute the distance between the machine attentions and the \"true\" alignments, and minimize this cost in the training procedure. Our experiments on large-scale Chinese-to-English task show that our model improves both translation and alignment qualities significantly over the large-vocabulary neural machine translation system, and even beats a state-of-the-art traditional syntax-based system.",
"We addressed the hypothesis that word segmentation based on statistical regularities occurs without the need of attention. Participants were presented with a stream of artificial speech in which the only cue to extract the words was the presence of statistical regularities between syllables. Half of the participants were asked to passively listen to the speech stream, while the other half were asked to perform a concurrent task. In Experiment 1, the concurrent task was performed on a separate auditory stream (noises), in Experiment 2 it was performed on a visual stream (pictures), and in Experiment 3 it was performed on pitch changes in the speech stream itself. Invariably, passive listening to the speech stream led to successful word extraction (as measured by a recognition test presented after the exposure phase), whereas diverted attention led to a dramatic impairment in word segmentation performance. These findings demonstrate that when attentional resources are depleted, word segmentation based on statistical regularities is seriously compromised.",
"Attention mechanisms have recently been introduced in deep learning for various tasks in natural language processing and computer vision. But despite their popularity, the correctness'' of the implicitly-learned attention maps has only been assessed qualitatively by visualization of several examples. In this paper we focus on evaluating and improving the correctness of attention in neural image captioning models. Specifically, we propose a quantitative evaluation metric for how well the attention maps align with human judgment, using recently released datasets with alignment between regions in images and entities in captions. We then propose novel models with different levels of explicit supervision for learning attention maps during training. The supervision can be strong when alignment between regions and caption entities are available, or weak when only object segments and categories are provided. We show on the popular Flickr30k and COCO datasets that introducing supervision of attention maps during training solidly improves both attention correctness and caption quality.",
"Inspired by recent work in machine translation and object detection, we introduce an attention based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr9k, Flickr30k and MS COCO.",
"The attention mechanisim is appealing for neural machine translation, since it is able to dynam- ically encode a source sentence by generating a alignment between a target word and source words. Unfortunately, it has been proved to be worse than conventional alignment models in aligment accuracy. In this paper, we analyze and explain this issue from the point view of re- ordering, and propose a supervised attention which is learned with guidance from conventional alignment models. Experiments on two Chinese-to-English translation tasks show that the super- vised attention mechanism yields better alignments leading to substantial gains over the standard attention based NMT.",
"Recurrent sequence generators conditioned on input data through an attention mechanism have recently shown very good performance on a range of tasks including machine translation, handwriting synthesis [1,2] and image caption generation [3]. We extend the attention-mechanism with features needed for speech recognition. We show that while an adaptation of the model used for machine translation in [2] reaches a competitive 18.7 phoneme error rate (PER) on the TIMET phoneme recognition task, it can only be applied to utterances which are roughly as long as the ones it was trained on. We offer a qualitative explanation of this failure and propose a novel and generic method of adding location-awareness to the attention mechanism to alleviate this issue. The new method yields a model that is robust to long inputs and achieves 18 PER in single utterances and 20 in 10-times longer (repeated) utterances. Finally, we propose a change to the attention mechanism that prevents it from concentrating too much on single frames, which further reduces PER to 17.6 level.",
"In this paper, we propose an effective way for biasing the attention mechanism of a sequence-to-sequence neural machine translation (NMT) model towards the well-studied statistical word alignment models. We show that our novel guided alignment training approach improves translation quality on real-life e-commerce texts consisting of product titles and descriptions, overcoming the problems posed by many unknown words and a large type token ratio. We also show that meta-data associated with input texts such as topic or category information can significantly improve translation quality when used as an additional signal to the decoder part of the network. With both novel features, the BLEU score of the NMT system on a product title set improves from 18.6 to 21.3 . Even larger MT quality gains are obtained through domain adaptation of a general domain NMT system to e-commerce data. The developed NMT system also performs well on the IWSLT speech translation task, where an ensemble of four variant systems outperforms the phrase-based baseline by 2.1 BLEU absolute."
]
} |
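One plausible reading of mapping the sparsity distribution of the targeted elements to a desired target distribution is a penalty on the mismatch between weight statistics and a sparse target. The sketch below implements that reading; it is an illustrative stand-in, not GASL's actual loss, and the target statistics and penalty weight are assumptions.

import torch

# Guide sparsity by penalizing the distance between summary statistics of
# the weights and those of a desired (near-zero, low-variance) target.
def guided_sparsity_penalty(weights: torch.Tensor,
                            target_mean: float = 0.0,
                            target_var: float = 1e-4) -> torch.Tensor:
    mean_term = (weights.mean() - target_mean) ** 2
    var_term = (weights.var() - target_var) ** 2
    return mean_term + var_term

w = torch.randn(128, 128, requires_grad=True)
loss = w.abs().mean() + 10.0 * guided_sparsity_penalty(w)  # l1 term + guidance
loss.backward()  # gradients now pull the weight distribution toward the target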
1901.01575 | 2907202993 | Subject matching performance in iris biometrics is contingent upon fast, high-quality iris segmentation. In many cases, iris biometrics acquisition equipment takes a number of images in sequence and combines the segmentation and matching results for each image to strengthen the result. To date, segmentation has occurred in 2D, operating on each image individually. But such methodologies, while powerful, do not take advantage of potential gains in performance afforded by treating sequential images as volumetric data. As a first step in this direction, we apply the Flexible Learning-Free Reconstruction of Neural Volumes (FLoRIN) framework, an open source segmentation and reconstruction framework originally designed for neural microscopy volumes, to volumetric segmentation of iris videos. Further, we introduce a novel dataset of near-infrared iris videos, in which each subject's pupil rapidly changes size due to visible-light stimuli, as a test bed for FLoRIN. We compare the matching performance for iris masks generated by FLoRIN, deep-learning-based (SegNet), and Daugman's (OSIRIS) iris segmentation approaches. We show that by incorporating volumetric information, FLoRIN achieves a factor of 3.6 to an order of magnitude increase in throughput with only a minor drop in subject matching performance. We also demonstrate that FLoRIN-based iris segmentation maintains this speedup on low-resource hardware, making it suitable for embedded biometrics systems. | As the first step of iris recognition, iris segmentation has been approached in a variety of ways. Early and, for many years, dominant methods relied upon the circular structure of both the pupil and the iris, as proposed by Daugman @cite_5 . In addition to circular approximations of the iris boundaries, those methods detected eyelids, eyelashes, and specular reflections, and excluded these occlusions from feature matching (a circular-boundary search sketch follows this record). | {
"cite_N": [
"@cite_5"
],
"mid": [
"2102796633"
],
"abstract": [
"A method for rapid visual recognition of personal identity is described, based on the failure of a statistical test of independence. The most unique phenotypic feature visible in a person's face is the detailed texture of each eye's iris. The visible texture of a person's iris in a real-time video image is encoded into a compact sequence of multi-scale quadrature 2-D Gabor wavelet coefficients, whose most-significant bits comprise a 256-byte \"iris code\". Statistical decision theory generates identification decisions from Exclusive-OR comparisons of complete iris codes at the rate of 4000 per second, including calculation of decision confidence levels. The distributions observed empirically in such comparisons imply a theoretical \"cross-over\" error rate of one in 131000 when a decision criterion is adopted that would equalize the false accept and false reject error rates. In the typical recognition case, given the mean observed degree of iris code agreement, the decision confidence levels correspond formally to a conditional false accept probability of one in about 10 sup 31 . >"
]
} |
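A coarse discrete version of the circular-boundary search behind Daugman-style segmentation fits in a few lines: for each candidate center, find the radius at which the radial derivative of the mean circular intensity peaks. The grid-search interface and the omission of Gaussian smoothing over radius are simplifications of the original integro-differential operator.

import numpy as np

def circular_mean(img, cx, cy, r, n=64):
    # Mean intensity sampled along a circle of radius r centered at (cx, cy).
    theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
    xs = np.clip((cx + r * np.cos(theta)).astype(int), 0, img.shape[1] - 1)
    ys = np.clip((cy + r * np.sin(theta)).astype(int), 0, img.shape[0] - 1)
    return img[ys, xs].mean()

def best_circle(img, centers, radii):
    # Return the (cx, cy, r) whose circle sits on the strongest radial edge.
    best, best_score = None, -np.inf
    for cx, cy in centers:
        means = np.array([circular_mean(img, cx, cy, r) for r in radii])
        deriv = np.abs(np.diff(means))   # |d/dr| of the circular mean intensity
        i = int(deriv.argmax())
        if deriv[i] > best_score:
            best_score, best = deriv[i], (cx, cy, radii[i])
    return best

# Usage on a grayscale image img (values in [0, 1]), radii as a sequence:
# cx, cy, r = best_circle(img, centers=[(100, 100)], radii=range(20, 80))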
1901.01575 | 2907202993 | Subject matching performance in iris biometrics is contingent upon fast, high-quality iris segmentation. In many cases, iris biometrics acquisition equipment takes a number of images in sequence and combines the segmentation and matching results for each image to strengthen the result. To date, segmentation has occurred in 2D, operating on each image individually. But such methodologies, while powerful, do not take advantage of potential gains in performance afforded by treating sequential images as volumetric data. As a first step in this direction, we apply the Flexible Learning-Free Reconstruction of Neural Volumes (FLoRIN) framework, an open source segmentation and reconstruction framework originally designed for neural microscopy volumes, to volumetric segmentation of iris videos. Further, we introduce a novel dataset of near-infrared iris videos, in which each subject's pupil rapidly changes size due to visible-light stimuli, as a test bed for FLoRIN. We compare the matching performance for iris masks generated by FLoRIN, deep-learning-based (SegNet), and Daugman's (OSIRIS) iris segmentation approaches. We show that by incorporating volumetric information, FLoRIN achieves a factor of 3.6 to an order of magnitude increase in throughput with only a minor drop in subject matching performance. We also demonstrate that FLoRIN-based iris segmentation maintains this speedup on low-resource hardware, making it suitable for embedded biometrics systems. | Subsequent approaches departed from circular approximations, and in many cases were inspired by Daugman's suggestion to use a Fourier series to provide a more complex model of the iris boundary @cite_24 . Arvacheh and Tizhoosh @cite_26 proposed using active contour models to detect the pupillary and limbic boundaries of the iris, taking into consideration the fact that neither the pupil nor the iris is perfectly circular, but only near-circular. They also introduced an algorithm that iteratively detects and excludes eyelid areas that cover the iris. In another approach, Shah and Ross @cite_12 proposed a method for extracting the iris using geodesic active contours to locate its boundaries in a given image. They focused on finding the iris boundaries while accounting for occlusions such as eyelids, since these make the iris region depart from a circular shape (a geodesic active contour sketch follows this record). | {
"cite_N": [
"@cite_24",
"@cite_26",
"@cite_12"
],
"mid": [
"2167075312",
"2040322660",
"2163155631"
],
"abstract": [
"This paper presents the following four advances in iris recognition: 1) more disciplined methods for detecting and faithfully modeling the iris inner and outer boundaries with active contours, leading to more flexible embedded coordinate systems; 2) Fourier-based methods for solving problems in iris trigonometry and projective geometry, allowing off-axis gaze to be handled by detecting it and ldquorotatingrdquo the eye into orthographic perspective; 3) statistical inference methods for detecting and excluding eyelashes; and 4) exploration of score normalizations, depending on the amount of iris data that is available in images and the required scale of database search. Statistical results are presented based on 200 billion iris cross-comparisons that were generated from 632 500 irises in the United Arab Emirates database to analyze the normalization issues raised in different regions of receiver operating characteristic curves.",
"This paper presents an active contour model to accurately detect pupil boundary in order to improve the performance of iris recognition systems. The contour model takes into consideration that an actual pupil boundary is a near-circular contour rather than a perfect circle. Two types of controlling force models, introduced as internal and external forces, are designed to properly activate the contour and locate it over the pupil boundary. The internal forces are designed to smooth the curve as well as to keep it close to a circular shape by pushing the contour vertices to their local radial mean. The external forces, which are responsible for pulling the contour vertices toward the pupil boundary, are designed based on a circular-curve gradient measurement with a proper angular range with respect to the contour center. In addition, an iterative algorithm has been developed in order to capture limbus and eyelids. The developed algorithm iteratively searches the limbus and eyelids boundaries and excludes the detected eyelids areas that cover the iris. Excluding the eyelids leads to a more precise search for limbus in the next iteration and the search is completed when the circular parameters of the limbus converge to fixed values. The eyelid contours are modeled as elliptic curves considering the spherical shape of an eyeball and the search is based on the expected contour in different degrees of eye openness.",
"The richness and apparent stability of the iris texture make it a robust biometric trait for personal authentication. The performance of an automated iris recognition system is affected by the accuracy of the segmentation process used to localize the iris structure. Most segmentation models in the literature assume that the pupillary, limbic, and eyelid boundaries are circular or elliptical in shape. Hence, they focus on determining model parameters that best fit these hypotheses. However, it is difficult to segment iris images acquired under nonideal conditions using such conic models. In this paper, we describe a novel iris segmentation scheme employing geodesic active contours (GACs) to extract the iris from the surrounding structures. Since active contours can 1) assume any shape and 2) segment multiple objects simultaneously, they mitigate some of the concerns associated with traditional iris segmentation models. The proposed scheme elicits the iris texture in an iterative fashion and is guided by both local and global properties of the image. The matching accuracy of an iris recognition system is observed to improve upon application of the proposed segmentation algorithm. Experimental results on the CASIA v3.0 and WVU nonideal iris databases indicate the efficacy of the proposed technique."
]
} |
1901.01575 | 2907202993 | Subject matching performance in iris biometrics is contingent upon fast, high-quality iris segmentation. In many cases, iris biometrics acquisition equipment takes a number of images in sequence and combines the segmentation and matching results for each image to strengthen the result. To date, segmentation has occurred in 2D, operating on each image individually. But such methodologies, while powerful, do not take advantage of potential gains in performance afforded by treating sequential images as volumetric data. As a first step in this direction, we apply the Flexible Learning-Free Reconstruction of Neural Volumes (FLoRIN) framework, an open source segmentation and reconstruction framework originally designed for neural microscopy volumes, to volumetric segmentation of iris videos. Further, we introduce a novel dataset of near-infrared iris videos, in which each subject's pupil rapidly changes size due to visible-light stimuli, as a test bed for FLoRIN. We compare the matching performance for iris masks generated by FLoRIN, deep-learning-based (SegNet), and Daugman's (OSIRIS) iris segmentation approaches. We show that by incorporating volumetric information, FLoRIN achieves a factor of 3.6 to an order of magnitude increase in throughput with only a minor drop in subject matching performance. We also demonstrate that FLoRIN-based iris segmentation maintains this speedup on low-resource hardware, making it suitable for embedded biometrics systems. | Other methods approached iris segmentation from a localization perspective. Luengo @cite_9 proposed using mathematical morphology to extract the outer boundaries of both the pupil and the iris. They dealt with occlusions by simply removing portions of the segmentation which likely contained parts of the eyelid, eyelashes, reflections, etc. In another approach, He @cite_27 extracted the iris center from the image and from that detected the iris boundaries. The novelty of their algorithm was the use of a histogram filter to detect irregularities in the shape of the eyelids. A rough morphology-based sketch follows this record. | {
"cite_N": [
"@cite_27",
"@cite_9"
],
"mid": [
"2100603595",
"2006699888"
],
"abstract": [
"Iris segmentation is an essential module in iris recognition because it defines the effective image region used for subsequent processing such as feature extraction. Traditional iris segmentation methods often involve an exhaustive search of a large parameter space, which is time consuming and sensitive to noise. To address these problems, this paper presents a novel algorithm for accurate and fast iris segmentation. After efficient reflection removal, an Adaboost-cascade iris detector is first built to extract a rough position of the iris center. Edge points of iris boundaries are then detected, and an elastic model named pulling and pushing is established. Under this model, the center and radius of the circular iris boundaries are iteratively refined in a way driven by the restoring forces of Hooke's law. Furthermore, a smoothing spline-based edge fitting scheme is presented to deal with noncircular iris boundaries. After that, eyelids are localized via edge detection followed by curve fitting. The novelty here is the adoption of a rank filter for noise elimination and a histogram filter for tackling the shape irregularity of eyelids. Finally, eyelashes and shadows are detected via a learned prediction model. This model provides an adaptive threshold for eyelash and shadow detection by analyzing the intensity distributions of different iris regions. Experimental results on three challenging iris image databases demonstrate that the proposed algorithm outperforms state-of-the-art methods in both accuracy and speed.",
"This paper proposes a new approach for fast iris segmentation that relies on the closed nested structures of iris anatomy (the sclera is brighter than the iris, and the iris is brighter than the pupil) and on its polar symmetry. The described method applies mathematical morphology for polar radial-invariant image filtering and for circular segmentation using shortest paths from generalized grey-level distances. The proposed algorithm obtained good results on the NICE-I contest and showed a very robust behavior, especially when dealing with half-closed eyes, different skin colours illumination or subjects wearing glasses."
]
} |
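The morphology-based localization idea referenced in the record above can be illustrated with a few lines of standard image-processing code. The following is only a rough sketch under assumed parameters (intensity threshold, structuring-element size), not the published polar-filtering algorithm of @cite_9:

```python
import numpy as np
from scipy import ndimage

def rough_pupil_mask(gray_image, threshold=60):
    """Rough pupil localization with mathematical morphology.

    Illustrative sketch only; the threshold and the 5x5 structuring
    element are assumptions, not details of the cited method."""
    dark = gray_image < threshold                    # pupil pixels are very dark
    opened = ndimage.binary_opening(dark, structure=np.ones((5, 5)))
    filled = ndimage.binary_fill_holes(opened)       # absorb specular highlights
    labels, n = ndimage.label(filled)
    if n == 0:
        return filled
    sizes = ndimage.sum(filled, labels, range(1, n + 1))
    return labels == (np.argmax(sizes) + 1)          # keep the largest blob
```

In practice the cited method works in polar coordinates and uses shortest-path circular segmentation; the sketch only shows the thresholding-plus-morphology flavor of the approach.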
1901.01575 | 2907202993 | Subject matching performance in iris biometrics is contingent upon fast, high-quality iris segmentation. In many cases, iris biometrics acquisition equipment takes a number of images in sequence and combines the segmentation and matching results for each image to strengthen the result. To date, segmentation has occurred in 2D, operating on each image individually. But such methodologies, while powerful, do not take advantage of potential gains in performance afforded by treating sequential images as volumetric data. As a first step in this direction, we apply the Flexible Learning-Free Reconstruction of Neural Volumes (FLoRIN) framework, an open source segmentation and reconstruction framework originally designed for neural microscopy volumes, to volumetric segmentation of iris videos. Further, we introduce a novel dataset of near-infrared iris videos, in which each subject's pupil rapidly changes size due to visible-light stimuli, as a test bed for FLoRIN. We compare the matching performance for iris masks generated by FLoRIN, deep-learning-based (SegNet), and Daugman's (OSIRIS) iris segmentation approaches. We show that by incorporating volumetric information, FLoRIN achieves a factor of 3.6 to an order of magnitude increase in throughput with only a minor drop in subject matching performance. We also demonstrate that FLoRIN-based iris segmentation maintains this speedup on low-resource hardware, making it suitable for embedded biometrics systems. | More recently, iris segmentation has moved away from these classical approaches toward machine learning oriented methods. Li @cite_6 proposed a method which uses k-means clustering to detect the outer boundary of the iris. In a similar fashion, Sahmoud and Abuhaiba @cite_19 proposed using k-means clustering to determine the region of the iris. A toy sketch of this clustering idea follows this record. | {
"cite_N": [
"@cite_19",
"@cite_6"
],
"mid": [
"1975071533",
"2118946293"
],
"abstract": [
"Recently, iris recognition systems have gained increased attention especially in non-cooperative environments. One of the crucial steps in the iris recognition system is the iris segmentation because it significantly affects the accuracy of the feature extraction and iris matching steps. Traditional iris segmentation methods provide excellent results when iris images are captured using near infrared cameras under ideal imaging conditions, but the accuracy of these algorithms significantly decreases when the iris images are taken in visible wavelength under non-ideal imaging conditions. In this paper, a new algorithm is proposed to segments iris images captured in visible wavelength under unconstrained environments. The proposed algorithm reduces the error percentage even in the presence of types of noise include iris obstructions and specular reflection. The proposed algorithm starts with determining the expected region of the iris using the K-means clustering algorithm. The Circular Hough Transform (CHT) is then employed in order to estimate the iris radius and center. A new efficient algorithm is developed to detect and isolate the upper eyelids. Finally, the non-iris regions are removed. Results of applying the proposed algorithm on UBIRIS iris image databases demonstrate that it improves the segmentation accuracy and time.",
"Iris segmentation plays an important role in an accurate iris recognition system. In less constrained environments where iris images are captured at-a-distance and on-the-move, iris segmentation becomes much more difficult due to the effects of significant variation of eye position and size, eyebrows, eyelashes, glasses and contact lenses, and hair, together with illumination changes and varying focus condition. This paper contributes to robust and accurate iris segmentation in very noisy images. Our main contributions are as follows: (1) we propose a limbic boundary localization algorithm that combines K-Means clustering based on the gray-level co-occurrence histogram and an improved Hough transform, and, in possible failures, a complementary method that uses skin information; the best localization between this and the former is selected. (2) An upper eyelid detection approach is presented, which combines a parabolic integro-differential operator and a RANSAC (RANdom SAmple Consensus)-like technique that utilizes edgels detected by a one-dimensional edge detector. (3) A segmentation approach is presented that exploits various techniques and different image information, following the idea of focus of attention, which progressively detects the eye, localizes the limbic and then pupillary boundaries, locates the eyelids and removes the specular highlight. The proposed method was evaluated in the UBIRIS.v2 testing database by the NICE.I organizing committee. We were ranked #4 among all participants according to the evaluation results."
]
} |
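As a concrete illustration of the k-means localization idea in the record above, the sketch below clusters pixel intensities and keeps the darkest cluster as a rough pupil/iris mask. The cluster count and the darkest-cluster heuristic are assumptions, not details of the cited methods:

```python
import numpy as np
from sklearn.cluster import KMeans

def iris_region_by_kmeans(gray_image, n_clusters=3):
    """Cluster pixel intensities and return a mask for the darkest
    cluster, a rough stand-in for the pupil/iris region."""
    pixels = gray_image.reshape(-1, 1).astype(np.float64)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(pixels)
    darkest = np.argmin(km.cluster_centers_.ravel())   # iris/pupil pixels are dark
    return (km.labels_ == darkest).reshape(gray_image.shape)
```

The cited methods follow this coarse clustering step with boundary fitting (e.g., a circular Hough transform), which is omitted here.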
1901.01575 | 2907202993 | Subject matching performance in iris biometrics is contingent upon fast, high-quality iris segmentation. In many cases, iris biometrics acquisition equipment takes a number of images in sequence and combines the segmentation and matching results for each image to strengthen the result. To date, segmentation has occurred in 2D, operating on each image individually. But such methodologies, while powerful, do not take advantage of potential gains in performance afforded by treating sequential images as volumetric data. As a first step in this direction, we apply the Flexible Learning-Free Reconstruction of Neural Volumes (FLoRIN) framework, an open source segmentation and reconstruction framework originally designed for neural microscopy volumes, to volumetric segmentation of iris videos. Further, we introduce a novel dataset of near-infrared iris videos, in which each subject's pupil rapidly changes size due to visible-light stimuli, as a test bed for FLoRIN. We compare the matching performance for iris masks generated by FLoRIN, deep-learning-based (SegNet), and Daugman's (OSIRIS) iris segmentation approaches. We show that by incorporating volumetric information, FLoRIN achieves a factor of 3.6 to an order of magnitude increase in throughput with only a minor drop in subject matching performance. We also demonstrate that FLoRIN-based iris segmentation maintains this speedup on low-resource hardware, making it suitable for embedded biometrics systems. | Recent advances in deep learning-based segmentation have resulted in many applications of convolutional neural network architectures to iris segmentation. Jalilian and Uhl @cite_20 proposed to use several types of convolutional encoder-decoder networks and reported better performance for deep learning-based approaches when compared to conventional algorithms such as Daugman's approach @cite_17 , WAHET @cite_13 , CAHT @cite_21 and IFPP @cite_25 . Arsalan et al. proposed to use a modified VGG-Face network to segment visible-light @cite_1 and near-infrared @cite_29 iris images. Other deep learning-based iris segmentation techniques include using a re-trained U-Net @cite_16 , semi-parallel Fully Convolutional Deep Neural Networks @cite_15 , Generative Adversarial Networks @cite_0 and Mask R-CNN @cite_2 . SegNet @cite_14 was successfully re-trained by Trokielewicz et al. @cite_22 to segment very challenging post-mortem iris images. A minimal encoder-decoder sketch follows this record. | {
"cite_N": [
"@cite_14",
"@cite_22",
"@cite_29",
"@cite_21",
"@cite_1",
"@cite_0",
"@cite_2",
"@cite_15",
"@cite_16",
"@cite_13",
"@cite_25",
"@cite_20",
"@cite_17"
],
"mid": [
"2963881378",
"2963006539",
"2802806477",
"2200208292",
"2765685479",
"2891787817",
"2952329079",
"2774581827",
"2891396062",
"2118282683",
"121937170",
"2741839910",
"1869924930"
],
"abstract": [
"We present a novel and practical deep fully convolutional neural network architecture for semantic pixel-wise segmentation termed SegNet. This core trainable segmentation engine consists of an encoder network, a corresponding decoder network followed by a pixel-wise classification layer. The architecture of the encoder network is topologically identical to the 13 convolutional layers in the VGG16 network [1] . The role of the decoder network is to map the low resolution encoder feature maps to full input resolution feature maps for pixel-wise classification. The novelty of SegNet lies is in the manner in which the decoder upsamples its lower resolution input feature map(s). Specifically, the decoder uses pooling indices computed in the max-pooling step of the corresponding encoder to perform non-linear upsampling. This eliminates the need for learning to upsample. The upsampled maps are sparse and are then convolved with trainable filters to produce dense feature maps. We compare our proposed architecture with the widely adopted FCN [2] and also with the well known DeepLab-LargeFOV [3] , DeconvNet [4] architectures. This comparison reveals the memory versus accuracy trade-off involved in achieving good segmentation performance. SegNet was primarily motivated by scene understanding applications. Hence, it is designed to be efficient both in terms of memory and computational time during inference. It is also significantly smaller in the number of trainable parameters than other competing architectures and can be trained end-to-end using stochastic gradient descent. We also performed a controlled benchmark of SegNet and other architectures on both road scenes and SUN RGB-D indoor scene segmentation tasks. These quantitative assessments show that SegNet provides good performance with competitive inference time and most efficient inference memory-wise as compared to other architectures. We also provide a Caffe implementation of SegNet and a web demo at http: mi.eng.cam.ac.uk projects segnet .",
"This paper presents a method for segmenting iris images obtained from the deceased subjects, by training a deep convolutional neural network (DCNN) designed for the purpose of semantic segmentation. Post-mortem iris recognition has recently emerged as an alternative, or additional, method useful in forensic analysis. At the same time it poses many new challenges from the technological standpoint, one of them being the image segmentation stage, which has proven difficult to be reliably executed by conventional iris recognition methods. Our approach is based on the SegNet architecture, fine-tuned with 1,300 manually segmented post-mortem iris images taken from the Warsaw-BioBase-Post-Mortem-Iris v1.0 database. The experiments presented in this paper show that this data-driven solution is able to learn specific deformations present in post-mortem samples, which are missing from alive irises, and offers a considerable improvement over the state-of-the-art, conventional segmentation algorithm (OSIRIS): the Intersection over Union (IoU) metric was improved from 73.6 (for OSIRIS) to 83 (for DCNN-based presented in this paper) averaged over subject-disjoint, multiple splits of the data into train and test subsets. This paper offers the first known to us method of automatic processing of post-mortem iris images. We offer source codes with the trained DCNN that perform end-to-end segmentation of post-mortem iris images, as described in this paper. Also, we offer binary masks corresponding to manual segmentation of samples from Warsaw-BioBase-Post-Mortem-Iris v1.0 database to facilitate development of alternative methods for post-mortem iris segmentation.",
"The recent advancements in computer vision have opened new horizons for deploying biometric recognition algorithms in mobile and handheld devices. Similarly, iris recognition is now much needed in unconstraint scenarios with accuracy. These environments make the acquired iris image exhibit occlusion, low resolution, blur, unusual glint, ghost effect, and off-angles. The prevailing segmentation algorithms cannot cope with these constraints. In addition, owing to the unavailability of near-infrared (NIR) light, iris recognition in visible light environment makes the iris segmentation challenging with the noise of visible light. Deep learning with convolutional neural networks (CNN) has brought a considerable breakthrough in various applications. To address the iris segmentation issues in challenging situations by visible light and near-infrared light camera sensors, this paper proposes a densely connected fully convolutional network (IrisDenseNet), which can determine the true iris boundary even with inferior-quality images by using better information gradient flow between the dense blocks. In the experiments conducted, five datasets of visible light and NIR environments were used. For visible light environment, noisy iris challenge evaluation part-II (NICE-II selected from UBIRIS.v2 database) and mobile iris challenge evaluation (MICHE-I) datasets were used. For NIR environment, the institute of automation, Chinese academy of sciences (CASIA) v4.0 interval, CASIA v4.0 distance, and IIT Delhi v1.0 iris datasets were used. Experimental results showed the optimal segmentation of the proposed IrisDenseNet and its excellent performance over existing algorithms for all five datasets.",
"Traditional iris processing following Daugman’s approach [116] extracts binary features after mapping the textural area between inner pupillary and outer limbic boundary into a doubly dimensionless representation.",
"Existing iris recognition systems are heavily dependent on specific conditions, such as the distance of image acquisition and the stop-and-stare environment, which require significant user cooperation. In environments where user cooperation is not guaranteed, prevailing segmentation schemes of the iris region are confronted with many problems, such as heavy occlusion of eyelashes, invalid off-axis rotations, motion blurs, and non-regular reflections in the eye area. In addition, iris recognition based on visible light environment has been investigated to avoid the use of additional near-infrared (NIR) light camera and NIR illuminator, which increased the difficulty of segmenting the iris region accurately owing to the environmental noise of visible light. To address these issues; this study proposes a two-stage iris segmentation scheme based on convolutional neural network (CNN); which is capable of accurate iris segmentation in severely noisy environments of iris recognition by visible light camera sensor. In the experiment; the noisy iris challenge evaluation part-II (NICE-II) training database (selected from the UBIRIS.v2 database) and mobile iris challenge evaluation (MICHE) dataset were used. Experimental results showed that our method outperformed the existing segmentation methods.",
"The iris can be considered as one of the most important biometric traits due to its high degree of uniqueness. Iris-based biometrics applications depend mainly on the iris segmentation whose suitability is not robust for different environments such as near-infrared (NIR) and visible (VIS) ones. In this paper, two approaches for robust iris segmentation based on Fully Convolutional Networks (FCNs) and Generative Adversarial Networks (GANs) are described. Similar to a common convolutional network, but without the fully connected layers (i.e., the classification layers), an FCN employs at its end combination of pooling layers from different convolutional layers. Based on the game theory, a GAN is designed as two networks competing with each other to generate the best segmentation. The proposed segmentation networks achieved promising results in all evaluated datasets (i.e., BioSec, CasiaI3, CasiaT4, IITD-1) of NIR images and (NICE.I, CrEye-Iris and MICHE-I) of VIS images in both non-cooperative and cooperative domains, outperforming the baselines techniques which are the best ones found so far in the literature, i.e., a new state of the art for these datasets. Furthermore, we manually labeled 2,431 images from CasiaT4, CrEye-Iris and MICHE-I datasets, making the masks available for research purposes.",
"The extraction of consistent and identifiable features from an image of the human iris is known as iris recognition. Identifying which pixels belong to the iris, known as segmentation, is the first stage of iris recognition. Errors in segmentation propagate to later stages. Current segmentation approaches are tuned to specific environments. We propose using a convolution neural network for iris segmentation. Our algorithm is accurate when trained in a single environment and tested in multiple environments. Our network builds on the Mask R-CNN framework (, ICCV 2017). Our approach segments faster than previous approaches including the Mask R-CNN network. Our network is accurate when trained on a single environment and tested with a different sensors (either visible light or near-infrared). Its accuracy degrades when trained with a visible light sensor and tested with a near-infrared sensor (and vice versa). A small amount of retraining of the visible light model (using a few samples from a near-infrared dataset) yields a tuned network accurate in both settings. For training and testing, this work uses the Casia v4 Interval, Notre Dame 0405, Ubiris v2, and IITD datasets.",
"With the increasing imaging and processing capabilities of today's mobile devices, user authentication using iris biometrics has become feasible. However, as the acquisition conditions become more unconstrained and as image quality is typically lower than dedicated iris acquisition systems, the accurate segmentation of iris regions is crucial for these devices. In this work, an end to end Fully Convolutional Deep Neural Network (FCDNN) design is proposed to perform the iris segmentation task for lower-quality iris images. The network design process is explained in detail, and the resulting network is trained and tuned using several large public iris datasets. A set of methods to generate and augment suitable lower quality iris images from the high-quality public databases are provided. The network is trained on Near InfraRed (NIR) images initially and later tuned on additional datasets derived from visible images. Comprehensive inter-database comparisons are provided together with results from a selection of experiments detailing the effects of different tunings of the network. Finally, the proposed model is compared with SegNet-basic, and a near-optimal tuning of the network is compared to a selection of other state-of-art iris segmentation algorithms. The results show very promising performance from the optimized Deep Neural Networks design when compared with state-of-art techniques applied to the same lower quality datasets.",
"Iris segmentation is an important research topic that received significant attention from the research community over the years. Traditional iris segmentation techniques have typically been focused on hand-crafted procedures that, nonetheless, achieved remarkable segmentation performance even with images captured in difficult settings. With the success of deep-learning models, researchers are increasingly looking towards convolutional neural networks (CNNs) to further improve on the accuracy of existing iris segmentation techniques and several CNN-based techniques have already been presented recently in the literature. In this paper we also consider deep-learning models for iris segmentation and present an iris segmentation approach based on the popular U-Net architecture. Our model is trainable end-to-end and, hence, avoids the need for hand designing the segmentation procedure. We evaluate the model on the CASIA dataset and report encouraging results in comparison to existing techniques used in this area.",
"Efficient and robust segmentation of less intrusively or non-cooperatively captured iris images is still a challenging task in iris biometrics. This paper proposes a novel two-stage algorithm for the localization and mapping of iris texture in images of the human eye into Daugman's doubly dimensionless polar coordinates. Motivated by the growing demand for real-time capable solutions, coarse center detection and fine boundary localization usually combined in traditional approaches are decoupled. Therefore, search space at each stage is reduced without having to stick to simpler models. Another motivation of this work is independence of sensors. A comparison of reference software on different datasets highlights the problem of database-specific optimizations in existing solutions. This paper instead proposes the application of Gaussian weighting functions to incorporate model-specific prior knowledge. An adaptive Hough transform is applied at multiple resolutions to estimate the approximate position of the iris center. Subsequent polar transform detects the first elliptic limbic or pupillary boundary, and an ellipsopolar transform finds the second boundary based on the outcome of the first. This way, both iris images with clear limbic (typical for visible-wavelength) and with clear pupillary boundaries (typical for near infrared) can be processed in a uniform manner.",
"This paper presents a multi-stage iris segmentation framework for the localization of pupillary and limbic boundaries of human eyes. Instead of applying time-consuming exhaustive search approaches, like traditional circular Hough Transform or Daugman's integrodifferential operator, an iterative approach is used. By decoupling coarse center detection and fine boundary localization, faster processing and modular design can be achieved. This alleviates more sophisticated quality control and feedback during the segmentation process. By avoiding database-specific optimizations, this work aims at supporting different sensors and light spectra, i.e. Visible Wavelength and Near Infrared, without parameter tuning. The system is evaluated by using multiple open iris databases and it is compared to existing classical approaches.",
"As a considerable breakthrough in artificial intelligence, deep learning has gained great success in resolving key computer vision challenges. Accurate segmentation of the iris region in the eye image plays a vital role in efficient performance of iris recognition systems, as one of the most reliable systems used for biometric identification. In this chapter, as the first contribution, we consider the application of Fully Convolutional Encoder–Decoder Networks (FCEDNs) for iris segmentation. To this extent, we utilize three types of FCEDN architectures for segmentation of the iris in the images, obtained from five different datasets, acquired under different scenarios. Subsequently, we conduct performance analysis, evaluation, and comparison of these three networks for iris segmentation. Furthermore, and as the second contribution, in order to subsidize the true evaluation of the proposed networks, we apply a selection of conventional (non-CNN) iris segmentation algorithms on the same datasets, and similarly evaluate their performances. The results then get compared against those obtained from the FCEDNs. Based on the results, the proposed networks achieve superior performance over all other algorithms, on all datasets.",
"Abstract In this paper, we present the evolution of the open source iris recognition system OSIRIS through its more relevant versions: OSIRISV2, OSIRISV4, and OSIRISV4.1. We developed OSIRIS in the framework of BioSecure Association as an open source software aiming at providing a reference for the scientific community. The software is mainly composed of four key modules, namely segmentation, normalization, feature extraction and template matching, which are described in detail for each version. A novel approach for iris normalization, based on a non geometric parameterization of contours is proposed in the latest version: OSIRISV4.1 and is detailed in particular here. Improvements in performance through the different versions of OSIRIS are reported on two public databases commonly used, ICE2005 and CASIA-IrisV4-Thousand. We note the high verification rates obtained by the last version. For this reason, OSIRISV4.1 can be proposed as a baseline system for comparison to other algorithms, this way supplying a helpful research tool for the iris recognition community."
]
} |
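To make the encoder-decoder family surveyed above concrete, here is a toy fully convolutional model producing a per-pixel iris probability map. It is deliberately tiny and is not SegNet, U-Net, or any of the cited networks; channel counts, depth, and the absence of skip connections or pooling indices are simplifications:

```python
import torch
import torch.nn as nn

class TinyEncoderDecoder(nn.Module):
    """Minimal convolutional encoder-decoder for binary iris masks."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                         # downsample by 2
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 2, stride=2),  # back to input resolution
        )

    def forward(self, x):
        # Per-pixel iris probability for a (N, 1, H, W) grayscale batch,
        # assuming H and W are divisible by 4.
        return torch.sigmoid(self.decoder(self.encoder(x)))
```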
1901.01574 | 2949199146 | This work systematically analyzes the smoothing effect of vocabulary reduction for phrase translation models. We extensively compare various word-level vocabularies to show that the performance of smoothing is not significantly affected by the choice of vocabulary. This result provides empirical evidence that the standard phrase translation model is extremely sparse. Our experiments also reveal that vocabulary reduction is more effective for smoothing large-scale phrase tables. | Other types of features are also trained on word-level labels, e.g. hierarchical reordering features @cite_3 , an @math -gram-based translation model @cite_4 , and sparse word pair features @cite_5 . The first and the third are trained with a large-scale discriminative training algorithm. | {
"cite_N": [
"@cite_5",
"@cite_4",
"@cite_3"
],
"mid": [
"2528305545",
"2251369387",
""
],
"abstract": [
"This paper describes the submission of the University of Edinburgh and the Johns Hopkins University for the shared translation task of the EMNLP 2015 Tenth Workshop on Statistical Machine Translation (WMT 2015). We set up phrase-based statistical machine translation systems for all ten language pairs of this year’s evaluation campaign, which are English paired with Czech, Finnish, French, German, and Russian in both translation directions. Novel research directions we investigated include: neural network language models and bilingual neural network language models, a comprehensive use of word classes, and sparse lexicalized reordering features.",
"We investigate the use of generalized representations (POS, morphological analysis and word clusters) in phrase-based models and the N-gram-based Operation Sequence Model (OSM). Our integration enables these models to learn richer lexical and reordering patterns, consider wider contextual information and generalize better in sparse data conditions. When interpolating generalized OSM models on the standard IWSLT and WMT tasks we observed improvements of up to +1.35 on the English-to-German task and +0.63 for the German-to-English task. Using automatically generated word classes in standard phrase-based models and the OSM models yields an average improvement of +0.80 across 8 language pairs on the IWSLT shared task.",
""
]
} |
1901.01466 | 2949875363 | Statistical spoken dialogue systems usually rely on a single- or multi-domain dialogue model that is restricted in its capabilities of modelling complex dialogue structures, e.g., relations. In this work, we propose a novel dialogue model that is centred around entities and is able to model relations as well as multiple entities of the same type. We demonstrate in a prototype implementation benefits of relation modelling on the dialogue level and show that a trained policy using these relations outperforms the multi-domain baseline. Furthermore, we show that by modelling the relations on the dialogue level, the system is capable of processing relations present in the user input and even learns to address them in the system response. | The proposed CEDM achieves this by modelling state information about conversational entities instead of domains. More precisely, it models separate states about the objects (e.g., the hotel or restaurant) and the relations. Previous work on dialogue modelling already incorporated the idea of objects or entities to be the principal component of the dialogue state @cite_12 @cite_13 @cite_16 @cite_22 @cite_10 . However, these dialogue models are not based on statistical dialogue processing where a probability distribution over all dialogue states needs to be modelled and maintained. This additional complexity, though, cannot be incorporated in a straightforward way into the proposed models. In contrast, the CEDM offers a comprehensive and consistent way of modelling these probabilities by defining and maintaining entity-based states. Work on statistical dialogue state modelling @cite_30 @cite_21 @cite_19 also contains a variant of objects but is still based on the MDDM, thus not offering any mechanism to model multiple entities or relations between objects. Prior work proposed a belief tracking approach using relational trees; however, it only considers static relations present in the ontology and is not able to handle dynamic relations. A toy sketch of an entity-based state follows this record. | {
"cite_N": [
"@cite_13",
"@cite_30",
"@cite_22",
"@cite_21",
"@cite_19",
"@cite_16",
"@cite_10",
"@cite_12"
],
"mid": [
"",
"2115101920",
"1879317218",
"2564070522",
"2624448691",
"1550825643",
"2497525228",
"1555547974"
],
"abstract": [
"",
"This paper explains how Partially Observable Markov Decision Processes (POMDPs) can provide a principled mathematical framework for modelling the inherent uncertainty in spoken dialogue systems. It briefly summarises the basic mathematics and explains why exact optimisation is intractable. It then describes in some detail a form of approximation called the Hidden Information State model which does scale and which can be used to build practical systems. A prototype HIS system for the tourist information domain is evaluated and compared with a baseline MDP system using both user simulations and a live user trial. The results give strong support to the central contention that the POMDP-based framework is both a tractable and powerful approach to building more robust spoken dialogue systems.",
"This paper introduces a new dialogue management framework for goal-directed conversations. A declarative specification defines the domain-specific elements and guides the dialogue manager, which communicates with the knowledge sources to complete the specified goal. The user is viewed as another knowledge source. The dialogue manager finds the next action by a mixture of rule-based reasoning and a simple statistical model. Implementation in the flight-reservation domain demonstrates that the framework enables the developer to easily build a conversational dialogue system.",
"",
"Recently, resources and tasks were proposed to go beyond state tracking in dialogue systems. An example is the frame tracking task, which requires recording multiple frames, one for each user goal set during the dialogue. This allows a user, for instance, to compare items corresponding to different goals. This paper proposes a model which takes as input the list of frames created so far during the dialogue, the current user utterance as well as the dialogue acts, slot types, and slot values associated with this utterance. The model then outputs the frame being referenced by each triple of dialogue act, slot type, and slot value. We show that on the recently published Frames dataset, this model significantly outperforms a previously proposed rule-based baseline. In addition, we propose an extensive analysis of the frame tracking task by dividing it into sub-tasks and assessing their difficulty with respect to our model.",
"In this paper we present a plug and play dialogue system for smart environments. The environment description and its state are stored on a domain ontology. This ontology is formed by entities that represent real world contextual information and abstract concepts. This information is complemented with linguistic parts that allow to automatically create a spoken interface for the environment. The spoken interface is based on multiple dialogues, related to every ontology entity with linguistic information. Firstly, the dialogue system creates appropriate grammars for the dialogues. Secondly, it creates the dialogue parts, employing a tree structure. Grammars support the recognition process and the dialogue tree supports the interpretation and generation processes. The system is being tested with a prototype formed by a living room. Users may interact with and modify the physical state of this living room environment by means of the spoken dialogue interface.",
"Introducing Spoken Dialogue Systems into Intelligent Environments outlines the formalisms of a novel knowledge-driven framework for spoken dialogue management and presents the implementation ofa model-based Adaptive Spoken Dialogue Manager(ASDM) called OwlSpeak. The authors have identified three stakeholders thatpotentially influence the behavior of the ASDM: the user, the SDS, and a complex Intelligent Environment (IE) consisting of various devices, services, and task descriptions. The theoretical foundation of a working ontology-based spoken dialogue description framework, the prototype implementation of the ASDM, and the evaluation activities that are presented as part of this book contribute to the ongoing spoken dialogue research by establishing the fertile ground of model-based adaptive spoken dialogue management. Thismonograph is ideal for advanced undergraduate students, PhD students, and postdocs as well as academic and industrial researchers and developers in speech and multimodal interactive systems.",
"Abstract : This report develops a representation of focus of attention thatcircumscribes discourse contexts within a general representation ofknowledge. Focus of attention is essential to any comprehension processbecause what and how a person understands is strongly influenced bywhere his attention is directed at a given moment. To formalize thenotion of focus, the need for and the use of focus mechanisms areconsidered from the standpoint of building a computer system that canparticipate in a natural language dialogue with a ser, Two ranges offocus, global and immediate, are investigated, and representations forincorporating them in a computer system are developed.The global focus in which an utterance is interpreted is determinedby the total discourse and situational setting of the utterance. Itinfluences what is talked about, how different concepts are introduced,and how concepts are referenced. To encode global focuscomputationally, a representation is developed that highlights thoseitems that are relevant at a given place in a dialogue. The underlyingknowledge representation is segmented into subunits, called focusspaces, that contain those items that are in the focus of attention of adialogue participant during a particular part of the dialogue.Mechanisms are required for updating the focus representation,because, as a dialogue progresses, the objects and actions that arerelevant to the conversation, and therefore in the participants' focusof attention, change. Procedures are described for deciding when andhow to shift focus in task-oriented dialogues, i.e., in dialogues inwhich the participants are cooperating in a shared task. Theseprocedures are guided by a representation of the task being performed.The ability to represent focus of attention in a languageunderstanding system results in a new approach to an important problemin discourse comprehension -- the identification of the referents ofdefinite noun phrases."
]
} |
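The entity-centric state idea described in the record above can be sketched as a small data structure: one marginal belief state per conversational entity plus explicit relation states, instead of a single multi-domain state. All names and probabilities below are illustrative assumptions, not the CEDM's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class EntityState:
    """Marginal belief state of one conversational entity (object),
    e.g. a hotel or a restaurant: slot -> {value: probability}."""
    entity_type: str
    slots: dict = field(default_factory=dict)

@dataclass
class RelationState:
    """Belief state attached to a relation between two entities."""
    source: str
    target: str
    relation: dict = field(default_factory=dict)

# A toy entity-centred dialogue state with two objects and one relation.
state = {
    "objects": {
        "hotel#1": EntityState("hotel", {"area": {"north": 0.8, "south": 0.2}}),
        "restaurant#1": EntityState("restaurant", {"food": {"thai": 0.9}}),
    },
    "relations": [
        RelationState("hotel#1", "restaurant#1",
                      {"same_area": {"true": 0.7, "false": 0.3}}),
    ],
}
```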
1901.01544 | 2906731962 | It is inevitable that large deep learning models must be trained on a large-scale cluster equipped with accelerators. Deep gradient compression can greatly increase bandwidth utilization and speed up the training process, but it is hard to implement on a ring structure. In this paper, we find that redundant gradients and gradient staleness have a negative effect on training. We have observed that in different epochs and different steps, the neural network focuses on updating different layers and different parameters. In order to save more communication bandwidth and preserve accuracy on a ring structure, which removes the scaling restriction as the number of nodes increases, we propose a new algorithm to measure the importance of gradients on a large-scale cluster implementing ring all-reduce, based on the magnitude of the ratio of a parameter's gradient to its value. Our importance-weighted pruning approach achieved gradient compression ratios of 64X and 58.8X for AlexNet and ResNet50 on ImageNet. Meanwhile, in order to maintain the sparseness of gradient propagation, we randomly broadcast the indices of important gradients from each node, while the remaining nodes gather the indexed gradients and perform the all-reduce update. This speeds up the convergence of the model and preserves the training accuracy. | Researchers have proposed many approaches to accelerate the distributed deep learning training process. Distributed synchronous Stochastic Gradient Descent (SSGD) is the commonly adopted solution for parallelizing the tasks across machines. Message exchange algorithms can also speed up the training process by making full use of the communication resources. @cite_2 exploits the characteristics of the deep learning training process, using a different learning rate for each layer based on the norm of the weights ( @math ) and the norm of the gradients ( @math ) for large-batch training, since the ratio of weights to gradients @math varies significantly across layers. However, it assigns an equal learning rate to all parameters within a layer and does not distinguish learning rates inside the same layer, even though the ratio of weights to gradients ( @math ) also varies significantly within a layer. A minimal sketch of this layer-wise scaling idea follows this record. | {
"cite_N": [
"@cite_2"
],
"mid": [
"2749988060"
],
"abstract": [
"The most natural way to speed-up the training of large networks is to use data-parallelism on multiple GPUs. To scale Stochastic Gradient (SG) based methods to more processors, one need to increase the batch size to make full use of the computational power of each GPU. However, keeping the accuracy of network with increase of batch size is not trivial. Currently, the state-of-the art method is to increase Learning Rate (LR) proportional to the batch size, and use special learning rate with \"warm-up\" policy to overcome initial optimization difficulty. By controlling the LR during the training process, one can efficiently use large-batch in ImageNet training. For example, Batch-1024 for AlexNet and Batch-8192 for ResNet-50 are successful applications. However, for ImageNet-1k training, state-of-the-art AlexNet only scales the batch size to 1024 and ResNet50 only scales it to 8192. The reason is that we can not scale the learning rate to a large value. To enable large-batch training to general networks or datasets, we propose Layer-wise Adaptive Rate Scaling (LARS). LARS LR uses different LRs for different layers based on the norm of the weights and the norm of the gradients. By using LARS algoirithm, we can scale the batch size to 32768 for ResNet50 and 8192 for AlexNet. Large batch can make full use of the system's computational power. For example, batch-4096 can achieve 3x speedup over batch-512 for ImageNet training by AlexNet model on a DGX-1 station (8 P100 GPUs)."
]
} |
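The layer-wise adaptive rate scaling described above boils down to scaling the learning rate of each layer by the ratio ||w|| / ||grad w||. A minimal PyTorch-style sketch follows; the trust coefficient and epsilon are assumed hyperparameters, and the weight-decay term of the full LARS formulation is omitted for brevity:

```python
import torch

def lars_layer_lrs(model, base_lr, trust_coef=0.001, eps=1e-9):
    """Compute a per-parameter-tensor learning rate in the LARS spirit:
    lr_layer = base_lr * trust_coef * ||w|| / ||grad w||."""
    lrs = {}
    for name, p in model.named_parameters():
        if p.grad is None:
            continue
        w_norm = p.detach().norm()
        g_norm = p.grad.detach().norm()
        ratio = w_norm / (g_norm + eps)       # ||w|| / ||grad w|| varies per layer
        lrs[name] = base_lr * trust_coef * float(ratio)
    return lrs
```

A caller would compute these rates after `loss.backward()` and apply them tensor-by-tensor in place of a single global learning rate.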
1901.01711 | 2906895564 | Matrix completion focuses on recovering a matrix from a small subset of its observed elements, and has already gained cumulative attention in computer vision. Many previous approaches formulate this issue as a low-rank matrix approximation problem. Recently, a truncated nuclear norm has been presented as a surrogate of the traditional nuclear norm, for better estimation of the rank of a matrix. The truncated nuclear norm regularization (TNNR) method is applicable in real-world scenarios. However, it is sensitive to the selection of the number of truncated singular values and requires numerous iterations to converge. Hereby, this paper proposes a revised approach called the double weighted truncated nuclear norm regularization (DW-TNNR), which assigns different weights to the rows and columns of a matrix separately, to accelerate the convergence with acceptable performance. The DW-TNNR is more robust to the number of truncated singular values than the TNNR. Instead of the iterative updating scheme in the second step of TNNR, this paper devises an efficient strategy that uses a gradient descent manner in a concise form, with a theoretical guarantee in optimization. Sufficient experiments conducted on real visual data prove that DW-TNNR has promising performance and holds the superiority in both speed and accuracy for matrix completion. | Instead of the standard nuclear norm, some studies devise variants of the nuclear norm to improve its performance. Oh @cite_20 proposed the partial sum minimization of singular values (PSSV) method to replace the traditional nuclear norm in RPCA; it implicitly imposes a soft constraint on the target rank. The objective of the PSSV is given by where @math is the target rank of @math , and @math is the @math norm. However, @math has to be determined before optimization and differs across scenarios. Moreover, this objective is highly non-convex. A small numeric sketch of the partial-sum term follows this record. | {
"cite_N": [
"@cite_20"
],
"mid": [
"1482071412"
],
"abstract": [
"Robust Principal Component Analysis (RPCA) via rank minimization is a powerful tool for recovering underlying low-rank structure of clean data corrupted with sparse noise outliers. In many low-level vision problems, not only it is known that the underlying structure of clean data is low-rank, but the exact rank of clean data is also known. Yet, when applying conventional rank minimization for those problems, the objective function is formulated in a way that does not fully utilize a priori target rank information about the problems. This observation motivates us to investigate whether there is a better alternative solution when using rank minimization. In this paper, instead of minimizing the nuclear norm, we propose to minimize the partial sum of singular values, which implicitly encourages the target rank constraint. Our experimental analyses show that, when the number of samples is deficient, our approach leads to a higher success rate than conventional rank minimization, while the solutions obtained by the two approaches are almost identical when the number of samples is more than sufficient. We apply our approach to various low-level vision problems, e.g., high dynamic range imaging, motion edge detection, photometric stereo, image alignment and recovery, and show that our results outperform those obtained by the conventional nuclear norm rank minimization method."
]
} |
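Since the PSSV objective itself is elided in the record above, the sketch below only evaluates its core ingredient under the standard definition: the sum of singular values beyond the target rank, which leaves the largest N singular values unpenalized:

```python
import numpy as np

def partial_sum_of_singular_values(L, target_rank):
    """Sum of the singular values beyond the target rank,
    i.e. sum over i > N of sigma_i(L), the PSSV surrogate term."""
    sigma = np.linalg.svd(L, compute_uv=False)   # singular values, descending
    return sigma[target_rank:].sum()
```

The full PSSV method additionally carries an l1-penalized sparse term and an ALM-style solver, both omitted here.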
1901.01711 | 2906895564 | Matrix completion focuses on recovering a matrix from a small subset of its observed elements, and has already gained cumulative attention in computer vision. Many previous approaches formulate this issue as a low-rank matrix approximation problem. Recently, a truncated nuclear norm has been presented as a surrogate of the traditional nuclear norm, for better estimation of the rank of a matrix. The truncated nuclear norm regularization (TNNR) method is applicable in real-world scenarios. However, it is sensitive to the selection of the number of truncated singular values and requires numerous iterations to converge. Hereby, this paper proposes a revised approach called the double weighted truncated nuclear norm regularization (DW-TNNR), which assigns different weights to the rows and columns of a matrix separately, to accelerate the convergence with acceptable performance. The DW-TNNR is more robust to the number of truncated singular values than the TNNR. Instead of the iterative updating scheme in the second step of TNNR, this paper devises an efficient strategy that uses a gradient descent manner in a concise form, with a theoretical guarantee in optimization. Sufficient experiments conducted on real visual data prove that DW-TNNR has promising performance and holds the superiority in both speed and accuracy for matrix completion. | Similarly, the joint Schatten- @math norm and @math norm @cite_23 were used to substitute the rank function and enhance the robustness to outliers. The @math norm of a vector @math is defined as @math . The definition of the Schatten- @math norm ( @math ) of a matrix @math is where @math is the @math -th singular value of @math . Although the Schatten- @math norm can approximate both the nuclear norm ( @math ) and the rank ( @math ), the resulting objective function is non-convex. It is optimized by the alternating direction method, which is not efficient enough. A short numeric sketch of these norms follows this record. | {
"cite_N": [
"@cite_23"
],
"mid": [
"2084983808"
],
"abstract": [
"The low-rank matrix completion problem is a fundamental machine learning and data mining problem with many important applications. The standard low-rank matrix completion methods relax the rank minimization problem by the trace norm minimization. However, this relaxation may make the solution seriously deviate from the original solution. Meanwhile, most completion methods minimize the squared prediction errors on the observed entries, which is sensitive to outliers. In this paper, we propose a new robust matrix completion method to address these two problems. The joint Schatten @math p -norm and @math l p -norm are used to better approximate the rank minimization problem and enhance the robustness to outliers. The extensive experiments are performed on both synthetic data and real-world applications in collaborative filtering prediction and social network link recovery. All empirical results show that our new method outperforms the standard matrix completion methods."
]
} |
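The record above elides the formulas, so the sketch below assumes the standard definitions: the Schatten-p norm is the lp norm of the singular values, (sum over i of sigma_i^p)^(1/p), and the elementwise lp norm of a vector is (sum over i of |x_i|^p)^(1/p). For p = 1 the Schatten norm reduces to the nuclear norm, and the sum of sigma_i^p approaches the rank as p tends to 0:

```python
import numpy as np

def schatten_p_norm(X, p):
    """Schatten-p norm: lp norm of the singular values of X."""
    sigma = np.linalg.svd(X, compute_uv=False)
    return (sigma ** p).sum() ** (1.0 / p)

def lp_norm(x, p):
    """Elementwise lp norm of a vector."""
    return (np.abs(x) ** p).sum() ** (1.0 / p)
```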
1901.01711 | 2906895564 | Matrix completion focuses on recovering a matrix from a small subset of its observed elements, and has already gained cumulative attention in computer vision. Many previous approaches formulate this issue as a low-rank matrix approximation problem. Recently, a truncated nuclear norm has been presented as a surrogate of the traditional nuclear norm, for better estimation of the rank of a matrix. The truncated nuclear norm regularization (TNNR) method is applicable in real-world scenarios. However, it is sensitive to the selection of the number of truncated singular values and requires numerous iterations to converge. Hereby, this paper proposes a revised approach called the double weighted truncated nuclear norm regularization (DW-TNNR), which assigns different weights to the rows and columns of a matrix separately, to accelerate the convergence with acceptable performance. The DW-TNNR is more robust to the number of truncated singular values than the TNNR. Instead of the iterative updating scheme in the second step of TNNR, this paper devises an efficient strategy that uses a gradient descent manner in a concise form, with a theoretical guarantee in optimization. Sufficient experiments conducted on real visual data prove that DW-TNNR has promising performance and holds the superiority in both speed and accuracy for matrix completion. | The TNNR was first presented by Hu @cite_15 , and it approximates the rank function better than the nuclear norm. In contrast to treating all singular values together, the TNNR leaves out the largest @math singular values and tries to minimize the smallest @math singular values, where @math , @math are the dimensions of the data and @math is the number of truncated singular values. In advance, let us define where @math is the set of locations in @math with respect to the known elements and @math is the counterpart with respect to the missing elements; that is, @math is the complement of @math . Hence, the TNNR minimization model is formulated as where @math , @math , @math denotes the nuclear norm (sum of all singular values), and @math denotes the trace function. It can be solved in a two-step iterative fashion. Let @math be the initialization. In Step 1 of the @math -th iteration, assume @math is the singular value decomposition (SVD) of @math , where @math and @math are the left and right orthogonal matrices. Thus, @math and @math are calculated directly through. A sketch of this Step-1 computation follows this record. | {
"cite_N": [
"@cite_15"
],
"mid": [
"1969698720"
],
"abstract": [
"Recovering a large matrix from a small subset of its entries is a challenging problem arising in many real applications, such as image inpainting and recommender systems. Many existing approaches formulate this problem as a general low-rank matrix approximation problem. Since the rank operator is nonconvex and discontinuous, most of the recent theoretical studies use the nuclear norm as a convex relaxation. One major limitation of the existing approaches based on nuclear norm minimization is that all the singular values are simultaneously minimized, and thus the rank may not be well approximated in practice. In this paper, we propose to achieve a better approximation to the rank of matrix by truncated nuclear norm, which is given by the nuclear norm subtracted by the sum of the largest few singular values. In addition, we develop a novel matrix completion algorithm by minimizing the Truncated Nuclear Norm. We further develop three efficient iterative procedures, TNNR-ADMM, TNNR-APGL, and TNNR-ADMMAP, to solve the optimization problem. TNNR-ADMM utilizes the alternating direction method of multipliers (ADMM), while TNNR-AGPL applies the accelerated proximal gradient line search method (APGL) for the final optimization. For TNNR-ADMMAP, we make use of an adaptive penalty according to a novel update rule for ADMM to achieve a faster convergence rate. Our empirical study shows encouraging results of the proposed algorithms in comparison to the state-of-the-art matrix completion algorithms on both synthetic and real visual datasets."
]
} |
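Step 1 above has a closed form under the standard TNNR construction: with the SVD X = U diag(s) V^T, choosing A and B as the transposed top-r left and right singular vectors makes trace(A X B^T) equal the sum of the r largest singular values. A numpy sketch follows; the variable names are mine, since the record's equations are elided:

```python
import numpy as np

def tnnr_step1(X, r):
    """Step 1 of the two-step TNNR iteration.

    Returns A (r x m) and B (r x n) such that trace(A @ X @ B.T)
    equals the sum of the r largest singular values of X, plus the
    truncated nuclear norm value (sum of the remaining singular values)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    A = U[:, :r].T                      # transposed top-r left singular vectors
    B = Vt[:r, :]                       # top-r right singular vectors (rows of V^T)
    trunc_nn = s.sum() - s[:r].sum()    # truncated nuclear norm of X
    return A, B, trunc_nn
```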
1901.01711 | 2906895564 | Matrix completion focuses on recovering a matrix from a small subset of its observed elements, and has already gained cumulative attention in computer vision. Many previous approaches formulate this issue as a low-rank matrix approximation problem. Recently, a truncated nuclear norm has been presented as a surrogate of the traditional nuclear norm, for better estimation of the rank of a matrix. The truncated nuclear norm regularization (TNNR) method is applicable in real-world scenarios. However, it is sensitive to the selection of the number of truncated singular values and requires numerous iterations to converge. Hereby, this paper proposes a revised approach called the double weighted truncated nuclear norm regularization (DW-TNNR), which assigns different weights to the rows and columns of a matrix separately, to accelerate the convergence with acceptable performance. The DW-TNNR is more robust to the number of truncated singular values than the TNNR. Instead of the iterative updating scheme in the second step of TNNR, this paper devises an efficient strategy that uses a gradient descent manner in a concise form, with a theoretical guarantee in optimization. Sufficient experiments conducted on real visual data prove that DW-TNNR has promising performance and holds the superiority in both speed and accuracy for matrix completion. | In Step 2, @math is obtained by solving the following sub-problem: Two typical optimization approaches were developed in @cite_15 to minimize it, i.e., the alternating direction method of multipliers and the accelerated proximal gradient line search method. However, both of them require numerous iterations to converge and obtain sub-optimal results in Step 2. The TNNR algorithm alternately executes the two steps above. A proximal-gradient sketch of this sub-problem follows this record. | {
"cite_N": [
"@cite_15"
],
"mid": [
"1969698720"
],
"abstract": [
"Recovering a large matrix from a small subset of its entries is a challenging problem arising in many real applications, such as image inpainting and recommender systems. Many existing approaches formulate this problem as a general low-rank matrix approximation problem. Since the rank operator is nonconvex and discontinuous, most of the recent theoretical studies use the nuclear norm as a convex relaxation. One major limitation of the existing approaches based on nuclear norm minimization is that all the singular values are simultaneously minimized, and thus the rank may not be well approximated in practice. In this paper, we propose to achieve a better approximation to the rank of matrix by truncated nuclear norm, which is given by the nuclear norm subtracted by the sum of the largest few singular values. In addition, we develop a novel matrix completion algorithm by minimizing the Truncated Nuclear Norm. We further develop three efficient iterative procedures, TNNR-ADMM, TNNR-APGL, and TNNR-ADMMAP, to solve the optimization problem. TNNR-ADMM utilizes the alternating direction method of multipliers (ADMM), while TNNR-AGPL applies the accelerated proximal gradient line search method (APGL) for the final optimization. For TNNR-ADMMAP, we make use of an adaptive penalty according to a novel update rule for ADMM to achieve a faster convergence rate. Our empirical study shows encouraging results of the proposed algorithms in comparison to the state-of-the-art matrix completion algorithms on both synthetic and real visual datasets."
]
} |
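Both solvers cited for this Step 2 sub-problem (ADMM and APGL) revolve around a singular-value shrinkage step for the nuclear-norm term. The sketch below shows that proximal operator in NumPy; it is a simplified building block under our own naming, not the exact TNNR-ADMM/TNNR-APGL update:

```python
import numpy as np

def svt(Y, tau):
    """Singular value thresholding: the proximal operator of
    tau * ||.||_* at Y, applied repeatedly inside ADMM/APGL-style
    solvers. Every singular value of Y is shrunk toward zero by tau."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt
```

In a completion loop, one would alternate this shrinkage with re-imposing the observed entries of the matrix.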
1901.01711 | 2906895564 | Matrix completion focuses on recovering a matrix from a small subset of its observed elements, and has already gained considerable attention in computer vision. Many previous approaches formulate this issue as a low-rank matrix approximation problem. Recently, a truncated nuclear norm has been presented as a surrogate of the traditional nuclear norm, for a better estimation of the rank of a matrix. The truncated nuclear norm regularization (TNNR) method is applicable in real-world scenarios. However, it is sensitive to the selection of the number of truncated singular values and requires numerous iterations to converge. Hence, this paper proposes a revised approach called the double weighted truncated nuclear norm regularization (DW-TNNR), which assigns different weights to the rows and columns of a matrix separately, to accelerate the convergence with acceptable performance. The DW-TNNR is more robust to the number of truncated singular values than the TNNR. Instead of the iterative updating scheme in the second step of TNNR, this paper devises an efficient strategy that uses gradient descent in a concise form, with a theoretical guarantee in optimization. Extensive experiments conducted on real visual data show that DW-TNNR has promising performance and is superior in both speed and accuracy for matrix completion. | Recently, a variety of studies have been derived from the TNNR. Hong @cite_4 combined the truncated nuclear norm with the online RPCA @cite_25 , to promote low-dimensional subspace estimation. Motion capture data completion @cite_21 demonstrated its validity by integrating the truncated nuclear norm. Large-scale multi-class classification using the TNNR and a multinomial logistic loss was suggested in @cite_24 . Lee and Lam @cite_2 applied the truncated nuclear norm heuristic to ghost-free high dynamic range imaging by exploiting the low-rank structure of irradiance maps. Cao @cite_22 extended the TNNR to the low-rank and sparse decomposition problem, and applied it to foreground object detection. | {
"cite_N": [
"@cite_4",
"@cite_22",
"@cite_21",
"@cite_24",
"@cite_2",
"@cite_25"
],
"mid": [
"",
"2529441728",
"2600509566",
"2043577076",
"2462232592",
"2154992274"
],
"abstract": [
"",
"Recovering the low-rank, sparse components of a given matrix is a challenging problem that arises in many real applications. Existing traditional approaches aimed at solving this problem are usually recast as a general approximation problem of a low-rank matrix. These approaches are based on the nuclear norm of the matrix, and thus in practice the rank may not be well approximated. This paper presents a new approach to solve this problem that is based on a new norm of a matrix, called the truncated nuclear norm (TNN). An efficient iterative scheme developed under the linearized alternating direction method multiple framework is proposed, where two novel iterative algorithms are designed to recover the sparse and low-rank components of matrix. More importantly, the convergence of the linearized alternating direction method multiple on our matrix recovering model is discussed and proved mathematically. To validate the effectiveness of the proposed methods, a series of comparative trials are performed on a variety of synthetic data sets. More specifically, the new methods are used to deal with problems associated with background subtraction (foreground object detection), and removing shadows and peculiarities from images of faces. Our experimental results illustrate that our new frameworks are more effective and accurate when compared with other methods.",
"The objective of motion capture (mocap) data completion is to recover missing measurement of the body markers from mocap. It becomes increasingly challenging as the missing ratio and duration of mocap data grow. Traditional approaches usually recast this problem as a low-rank matrix approximation problem based on the nuclear norm. However, the nuclear norm defined as the sum of all the singular values of a matrix is not a good approximation to the rank of mocap data. This paper proposes a novel approach to solve mocap data completion problem by adopting a new matrix norm, called truncated nuclear norm. An efficient iterative algorithm is designed to solve this problem based on the augmented Lagrange multiplier. The convergence of the proposed method is proved mathematically under mild conditions. To demonstrate the effectiveness of the proposed method, various comparative experiments are performed on synthetic data and mocap data. Compared to other methods, the proposed method is more efficient and accurate.",
"Abstract In this paper, we consider the problem of multi-class image classification when the classes behaviour has a low rank structure. That is, classes can be embedded into a low dimensional space. Traditional multi-class classification algorithms usually use nuclear norm to approximate the rank of the weight matrix. Considering the limited ability of the nuclear norm for the accurate approximation, we propose a new scalable large scale multi-class classification algorithm by using the recently proposed truncated nuclear norm as a better surrogate of the rank operator of matrices along with multinomial logisitic loss. To solve the non-convex and non-smooth optimization problem, we further develop an efficient iterative procedure. In each iteration, by lifting the non-smooth convex subproblem into an infinite dimensional l 1 norm regularized problem, a simple and efficient accelerated coordinate descent algorithm is applied to find the optimal solution. We conduct a series of evaluations on several public large scale image datasets, where the experimental results show the encouraging improvement of classification accuracy of the proposed algorithm in comparison with the state-of-the-art multi-class classification algorithms.",
"Matrix completion is a rank minimization problem to recover a low-rank data matrix from a small subset of its entries. Since the matrix rank is nonconvex and discrete, many existing approaches approximate the matrix rank as the nuclear norm. However, the truncated nuclear norm is known to be a better approximation to the matrix rank than the nuclear norm, exploiting a priori target rank information about the problem in rank minimization. In this paper, we propose a computationally efficient truncated nuclear norm minimization algorithm for matrix completion, which we call TNNM-ALM. We reformulate the original optimization problem by introducing slack variables and considering noise in the observation. The central contribution of this paper is to solve it efficiently via the augmented Lagrange multiplier (ALM) method, where the optimization variables are updated by closed-form solutions. We apply the proposed TNNM-ALM algorithm to ghost-free high dynamic range imaging by exploiting the low-rank structure of irradiance maps from low dynamic range images. Experimental results on both synthetic and real visual data show that the proposed algorithm achieves significantly lower reconstruction errors and superior robustness against noise than the conventional approaches, while providing substantial improvement in speed, thereby applicable to a wide range of imaging applications.",
"Robust PCA methods are typically based on batch optimization and have to load all the samples into memory during optimization. This prevents them from efficiently processing big data. In this paper, we develop an Online Robust PCA (OR-PCA) that processes one sample per time instance and hence its memory cost is independent of the number of samples, significantly enhancing the computation and storage efficiency. The proposed OR-PCA is based on stochastic optimization of an equivalent reformulation of the batch RPCA. Indeed, we show that OR-PCA provides a sequence of subspace estimations converging to the optimum of its batch counterpart and hence is provably robust to sparse corruption. Moreover, OR-PCA can naturally be applied for tracking dynamic subspace. Comprehensive simulations on subspace recovering and tracking demonstrate the robustness and efficiency advantages of the OR-PCA over online PCA and batch RPCA methods."
]
} |
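Several of these extensions (e.g., the low-rank and sparse decomposition in @cite_22) alternate between a singular-value shrinkage for the low-rank part and an entrywise soft-thresholding for the sparse part. A deliberately naive, self-contained sketch of that alternation (real RPCA-style solvers add Lagrange multipliers and adaptive penalties):

```python
import numpy as np

def svt(Y, tau):
    """Shrink the singular values of Y by tau (prox of tau * ||.||_*)."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def soft(Y, lam):
    """Entrywise soft-thresholding (prox of lam * ||.||_1)."""
    return np.sign(Y) * np.maximum(np.abs(Y) - lam, 0.0)

def decompose(D, tau=1.0, lam=0.1, iters=100):
    """Split D into a low-rank part L and a sparse part S by naive
    alternating proximal steps, only to illustrate the structure."""
    L = np.zeros_like(D)
    S = np.zeros_like(D)
    for _ in range(iters):
        L = svt(D - S, tau)
        S = soft(D - L, lam)
    return L, S
```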
1901.01517 | 2908001447 | The effectiveness of many optimal network control algorithms (e.g., BackPressure) relies on the premise that all of the nodes are fully controllable. However, these algorithms may yield poor performance in a partially-controllable network where a subset of nodes are uncontrollable and use some unknown policy. Such a partially-controllable model is of increasing importance in real-world networked systems such as overlay-underlay networks. In this paper, we design optimal network control algorithms that can stabilize a partially-controllable network. We first study the scenario where uncontrollable nodes use a queue-agnostic policy, and propose a low-complexity throughput-optimal algorithm, called Tracking-MaxWeight (TMW), which enhances the original MaxWeight algorithm with an explicit learning of the policy used by uncontrollable nodes. Next, we investigate the scenario where uncontrollable nodes use a queue-dependent policy and the problem is formulated as an MDP with unknown queueing dynamics. We propose a new reinforcement learning algorithm, called Truncated Upper Confidence Reinforcement Learning (TUCRL), and prove that TUCRL achieves tunable three-way tradeoffs between throughput, delay and convergence rate. | In terms of technical tools, our work leverages techniques from reinforcement learning, since a partially-controllable network with a queue-dependent uncontrollable policy can be formulated as an MDP with unknown dynamics. Over the past few decades, many reinforcement learning algorithms have been developed, such as Q-learning @cite_12 , actor-critic @cite_23 and policy gradient @cite_19 . Recently, the successful application of deep neural networks in reinforcement learning has produced many deep reinforcement learning algorithms such as DQN @cite_4 , DDPG @cite_8 and TRPO @cite_9 . However, most of these methods are heuristic-based and do not have any performance guarantees. Among the existing reinforcement learning algorithms there are a few that do provide good performance bounds, such as the model-based reinforcement learning algorithms UCRL @cite_26 @cite_1 and PSRL @cite_11 @cite_18 . Unfortunately, these algorithms require that the size of the state space be relatively small, so they cannot be directly applied in our context since the state space (i.e., the queue length space) contains countably infinitely many states. In this work, we combine the UCRL algorithm with a queue truncation technique and propose the Truncated Upper Confidence Reinforcement Learning (TUCRL) algorithm that has good performance guarantees even with a countably-infinite queue length space. | {
"cite_N": [
"@cite_18",
"@cite_26",
"@cite_4",
"@cite_8",
"@cite_9",
"@cite_1",
"@cite_19",
"@cite_23",
"@cite_12",
"@cite_11"
],
"mid": [
"2111764152",
"2097931172",
"1757796397",
"2173248099",
"1771410628",
"1850488217",
"",
"",
"",
"2964000194"
],
"abstract": [
"Most provably-efficient reinforcement learning algorithms introduce optimism about poorly-understood states and actions to encourage exploration. We study an alternative approach for efficient exploration: posterior sampling for reinforcement learning (PSRL). This algorithm proceeds in repeated episodes of known duration. At the start of each episode, PSRL updates a prior distribution over Markov decision processes and takes one sample from this posterior. PSRL then follows the policy that is optimal for this sample during the episode. The algorithm is conceptually simple, computationally efficient and allows an agent to encode prior knowledge in a natural way. We establish an O(τS √AT) bound on expected regret, where T is time, τ is the episode length and S and A are the cardinalities of the state and action spaces. This bound is one of the first for an algorithm not based on optimism, and close to the state of the art for any reinforcement learning algorithm. We show through simulation that PSRL significantly outperforms existing algorithms with similar regret bounds.",
"We present a learning algorithm for undiscounted reinforcement learning. Our interest lies in bounds for the algorithm's online performance after some finite number of steps. In the spirit of similar methods already successfully applied for the exploration-exploitation tradeoff in multi-armed bandit problems, we use upper confidence bounds to show that our UCRL algorithm achieves logarithmic online regret in the number of steps taken with respect to an optimal policy.",
"We present the first deep learning model to successfully learn control policies directly from high-dimensional sensory input using reinforcement learning. The model is a convolutional neural network, trained with a variant of Q-learning, whose input is raw pixels and whose output is a value function estimating future rewards. We apply our method to seven Atari 2600 games from the Arcade Learning Environment, with no adjustment of the architecture or learning algorithm. We find that it outperforms all previous approaches on six of the games and surpasses a human expert on three of them.",
"We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.",
"In this article, we describe a method for optimizing control policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified scheme, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.",
"For undiscounted reinforcement learning in Markov decision processes (MDPs) we consider the total regret of a learning algorithm with respect to an optimal policy. In order to describe the transition structure of an MDP we propose a new parameter: An MDP has diameter D if for any pair of states s,s' there is a policy which moves from s to s' in at most D steps (on average). We present a reinforcement learning algorithm with total regret O(DS√AT) after T steps for any unknown MDP with S states, A actions per state, and diameter D. A corresponding lower bound of Ω(√DSAT) on the total regret of any learning algorithm is given as well. These results are complemented by a sample complexity bound on the number of suboptimal steps taken by our algorithm. This bound can be used to achieve a (gap-dependent) regret bound that is logarithmic in T. Finally, we also consider a setting where the MDP is allowed to change a fixed number of l times. We present a modification of our algorithm that is able to deal with this setting and show a regret bound of O(l1 3T2 3DS√A).",
"",
"",
"",
"We consider the problem of learning an unknown Markov Decision Process (MDP) that is weakly communicating in the infinite horizon setting. We propose a Thompson Sampling-based reinforcement learning algorithm with dynamic episodes (TSDE). At the beginning of each episode, the algorithm generates a sample from the posterior distribution over the unknown model parameters. It then follows the optimal stationary policy for the sampled model for the rest of the episode. The duration of each episode is dynamically determined by two stopping criteria. The first stopping criterion controls the growth rate of episode length. The second stopping criterion happens when the number of visits to any state-action pair is doubled. We establish @math bounds on expected regret under a Bayesian setting, where @math and @math are the sizes of the state and action spaces, @math is time, and @math is the bound of the span. This regret bound matches the best available bound for weakly communicating MDPs. Numerical results show it to perform better than existing algorithms for infinite horizon MDPs."
]
} |
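The queue-truncation idea behind TUCRL can be illustrated with any tabular method: clamp queue lengths at a cap so the countably infinite state space becomes a finite table. The sketch below uses plain Q-learning (@cite_12) purely for illustration, not the paper's UCRL-based algorithm; all names and constants are ours:

```python
from collections import defaultdict

Q_MAX = 50  # truncation level: queue lengths above this are clamped

def truncate(queue_len):
    """Map a countably infinite queue length onto a finite state space."""
    return min(queue_len, Q_MAX)

def q_update(Q, state, action, reward, next_state, actions,
             alpha=0.1, gamma=0.99):
    """One tabular Q-learning update over truncated queue states."""
    s, s2 = truncate(state), truncate(next_state)
    best_next = max(Q[(s2, a)] for a in actions)
    Q[(s, action)] += alpha * (reward + gamma * best_next - Q[(s, action)])

Q = defaultdict(float)
q_update(Q, state=120, action=0, reward=-120, next_state=118, actions=[0, 1])
```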
1901.01577 | 2951502268 | We address for the first time unsupervised training for a translation task with hundreds of thousands of vocabulary words. We scale up the expectation-maximization (EM) algorithm to learn a large translation table without any parallel text or seed lexicon. First, we solve the memory bottleneck and enforce the sparsity with a simple thresholding scheme for the lexicon. Second, we initialize the lexicon training with word classes, which efficiently boosts the performance. Our methods produced promising results on two large-scale unsupervised translation tasks. | Early work on unsupervised sequence learning was mainly for solving substitution ciphers, a combinatorial problem of matching input-output symbols under a 1:1 or homophonic assumption @cite_10 @cite_4 @cite_15 . Decipherment for machine translation relaxes this assumption to allow many-to-many mappings, while the vocabulary is usually limited to a few thousand types @cite_11 @cite_1 @cite_13 @cite_20 . | {
"cite_N": [
"@cite_13",
"@cite_4",
"@cite_1",
"@cite_15",
"@cite_10",
"@cite_20",
"@cite_11"
],
"mid": [
"2164746297",
"2135391077",
"2102028293",
"2172267041",
"2136346830",
"2250600644",
"2169360026"
],
"abstract": [
"This paper addresses the problem of EMbased decipherment for large vocabularies. Here, decipherment is essentially a tagging problem: Every cipher token is tagged with some plaintext type. As with other tagging problems, this one can be treated as a Hidden Markov Model (HMM), only here, the vocabularies are large, so the usualO(NV 2 ) exact EM approach is infeasible. When faced with this situation, many people turn to sampling. However, we propose to use a type of approximate EM and show that it works well. The basic idea is to collect fractional counts only over a small subset of links in the forward-backward lattice. The subset is different for each iteration of EM. One option is to use beam search to do the subsetting. The second method restricts the successor words that are looked at, for each hypothesis. It does this by consulting pre-computed tables of likely n-grams and likely substitutions.",
"We introduce a novel Bayesian approach for deciphering complex substitution ciphers. Our method uses a decipherment model which combines information from letter n-gram language models as well as word dictionaries. Bayesian inference is performed on our model using an efficient sampling technique. We evaluate the quality of the Bayesian decipherment output on simple and homophonic letter substitution ciphers and show that unlike a previous approach, our method consistently produces almost 100 accurate decipherments. The new method can be applied on more complex substitution ciphers and we demonstrate its utility by cracking the famous Zodiac-408 cipher in a fully automated fashion, which has never been done before.",
"In this paper, we propose a new Bayesian inference method to train statistical machine translation systems using only nonparallel corpora. Following a probabilistic decipherment approach, we first introduce a new framework for decipherment training that is flexible enough to incorporate any number type of features (besides simple bag-of-words) as side-information used for estimating translation models. In order to perform fast, efficient Bayesian inference in this framework, we then derive a hash sampling strategy that is inspired by the work of (2012). The new translation hash sampler enables us to scale elegantly to complex models (for the first time) and large vocabulary corpora sizes. We show empirical results on the OPUS data—our method yields the best BLEU scores compared to existing approaches, while achieving significant computational speedups (several orders faster). We also report for the first time—BLEU score results for a largescale MT task using only non-parallel data (EMEA corpus).",
"In this paper we address the problem of solving substitution ciphers using a beam search approach. We present a conceptually consistent and easy to implement method that improves the current state of the art for decipherment of substitution ciphers and is able to use high ordern-gram language models. We show experiments with 1:1 substitution ciphers in which the guaranteed optimal solution for 3-gram language models has 38.6 decipherment error, while our approach achieves 4.13 decipherment error in a fraction of time by using a 6-gram language model. We also apply our approach to the famous Zodiac-408 cipher and obtain slightly better (and near to optimal) results than previously published. Unlike the previous state-of-the-art approach that uses additional word lists to evaluate possible decipherments, our approach only uses a letterbased 6-gram language model. Furthermore we use our algorithm to solve large vocabulary substitution ciphers and improve the best published decipherment error rate based on the Gigaword corpus of 7.8 to 6.0 error rate.",
"We study a number of natural language decipherment problems using unsupervised learning. These include letter substitution ciphers, character code conversion, phonetic decipherment, and word-based ciphers with relevance to machine translation. Straightforward unsupervised learning techniques most often fail on the first try, so we describe techniques for understanding errors and significantly increasing performance.",
"We introduce into Bayesian decipherment a base distribution derived from similarities of word embeddings. We use Dirichlet multinomial regression (Mimno and McCallum, 2012) to learn a mapping between ciphertext and plaintext word embeddings from non-parallel data. Experimental results show that the base distribution is highly beneficial to decipherment, improving state-of-the-art decipherment accuracy from 45.8 to 67.4 for Spanish English, and from 5.1 to 11.2 for Malagasy English.",
"In this paper we show how to train statistical machine translation systems on real-life tasks using only non-parallel monolingual data from two languages. We present a modification of the method shown in (Ravi and Knight, 2011) that is scalable to vocabulary sizes of several thousand words. On the task shown in (Ravi and Knight, 2011) we obtain better results with only 5 of the computational effort when running our method with an n-gram language model. The efficiency improvement of our method allows us to run experiments with vocabulary sizes of around 5,000 words, such as a non-parallel version of the VERBMOBIL corpus. We also report results using data from the monolingual French and English GIGAWORD corpora."
]
} |
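The "simple thresholding scheme for the lexicon" mentioned in the abstract above suggests a concrete mechanic: after each M-step, drop low-probability translation entries and renormalize so the table stays sparse and fits in memory. The following is our own guess at that mechanic, not the paper's exact procedure:

```python
def prune_lexicon(lexicon, threshold=1e-3):
    """Keep the lexicon sparse between EM iterations: drop entries whose
    probability falls below `threshold`, then renormalize per source word."""
    pruned = {}
    for src, dist in lexicon.items():
        kept = {tgt: p for tgt, p in dist.items() if p >= threshold}
        z = sum(kept.values())
        if z > 0:
            pruned[src] = {tgt: p / z for tgt, p in kept.items()}
    return pruned

lexicon = {"haus": {"house": 0.92, "home": 0.079, "mouse": 0.001}}
print(prune_lexicon(lexicon))  # "mouse" is dropped, the rest renormalized
```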
1901.01577 | 2951502268 | We address for the first time unsupervised training for a translation task with hundreds of thousands of vocabulary words. We scale up the expectation-maximization (EM) algorithm to learn a large translation table without any parallel text or seed lexicon. First, we solve the memory bottleneck and enforce the sparsity with a simple thresholding scheme for the lexicon. Second, we initialize the lexicon training with word classes, which efficiently boosts the performance. Our methods produced promising results on two large-scale unsupervised translation tasks. | There have been several attempts to improve the scalability of decipherment methods, which are however not applicable to 100k-vocabulary translation scenarios. For EM-based decipherment, previous work accelerates hypothesis expansions but does not explicitly solve the memory issue for a large lexicon table. Count-based Bayesian inference @cite_14 @cite_1 @cite_20 loses all context information beyond bigrams for the sake of efficiency; it is therefore particularly effective on contextless deterministic ciphers or in inducing an auxiliary lexicon for supervised SMT. Binary hashing has been used to quicken the Bayesian sampling procedure, but it shows poor performance in large-scale experiments. | {
"cite_N": [
"@cite_14",
"@cite_1",
"@cite_20"
],
"mid": [
"2103042430",
"2102028293",
"2250600644"
],
"abstract": [
"We apply slice sampling to Bayesian decipherment and use our new decipherment framework to improve out-of-domain machine translation. Compared with the state of the art algorithm, our approach is highly scalable and produces better results, which allows us to decipher ciphertext with billions of tokens and hundreds of thousands of word types with high accuracy. We decipher a large amount of monolingual data to improve out-of-domain translation and achieve significant gains of up to 3.8 BLEU points.",
"In this paper, we propose a new Bayesian inference method to train statistical machine translation systems using only nonparallel corpora. Following a probabilistic decipherment approach, we first introduce a new framework for decipherment training that is flexible enough to incorporate any number type of features (besides simple bag-of-words) as side-information used for estimating translation models. In order to perform fast, efficient Bayesian inference in this framework, we then derive a hash sampling strategy that is inspired by the work of (2012). The new translation hash sampler enables us to scale elegantly to complex models (for the first time) and large vocabulary corpora sizes. We show empirical results on the OPUS data—our method yields the best BLEU scores compared to existing approaches, while achieving significant computational speedups (several orders faster). We also report for the first time—BLEU score results for a largescale MT task using only non-parallel data (EMEA corpus).",
"We introduce into Bayesian decipherment a base distribution derived from similarities of word embeddings. We use Dirichlet multinomial regression (Mimno and McCallum, 2012) to learn a mapping between ciphertext and plaintext word embeddings from non-parallel data. Experimental results show that the base distribution is highly beneficial to decipherment, improving state-of-the-art decipherment accuracy from 45.8 to 67.4 for Spanish English, and from 5.1 to 11.2 for Malagasy English."
]
} |
1901.01577 | 2951502268 | We address for the first time unsupervised training for a translation task with hundreds of thousands of vocabulary words. We scale up the expectation-maximization (EM) algorithm to learn a large translation table without any parallel text or seed lexicon. First, we solve the memory bottleneck and enforce the sparsity with a simple thresholding scheme for the lexicon. Second, we initialize the lexicon training with word classes, which efficiently boosts the performance. Our methods produced promising results on two large-scale unsupervised translation tasks. | Our problem is also related to unsupervised part-of-speech tagging with hidden Markov models (HMM). To the best of our knowledge, there is no published work on HMM training for a 100k-size discrete space. HMM taggers are often integrated with sparse priors @cite_16 @cite_5 , which is not readily possible in a large vocabulary setting due to the memory bottleneck. | {
"cite_N": [
"@cite_5",
"@cite_16"
],
"mid": [
"1570013475",
"2099873701"
],
"abstract": [
"This paper investigates why the HMMs estimated by Expectation-Maximization (EM) produce such poor results as Part-of-Speech (POS) taggers. We find that the HMMs estimated by EM generally assign a roughly equal number of word tokens to each hidden state, while the empirical distribution of tokens to POS tags is highly skewed. This motivates a Bayesian approach using a sparse prior to bias the estimator toward such a skewed distribution. We investigate Gibbs Sampling (GS) and Variational Bayes (VB) estimators and show that VB converges faster than GS for this task and that VB significantly improves 1-to-1 tagging accuracy over EM. We also show that EM does nearly as well as VB when the number of hidden HMM states is dramatically reduced. We also point out the high variance in all of these estimators, and that they require many more iterations to approach convergence than usually thought.",
"Unsupervised learning of linguistic structure is a difficult problem. A common approach is to define a generative model and maximize the probability of the hidden structure given the observed data. Typically, this is done using maximum-likelihood estimation (MLE) of the model parameters. We show using part-of-speech tagging that a fully Bayesian approach can greatly improve performance. Rather than estimating a single set of parameters, the Bayesian approach integrates over all possible parameter values. This difference ensures that the learned structure will have high probability over a range of possible parameters, and permits the use of priors favoring the sparse distributions that are typical of natural language. Our model has the structure of a standard trigram HMM, yet its accuracy is closer to that of a state-of-the-art discriminative model (Smith and Eisner, 2005), up to 14 percentage points better than MLE. We find improvements both when training from data alone, and using a tagging dictionary."
]
} |
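The sparse priors in the HMM taggers cited above (@cite_16, @cite_5) are typically symmetric Dirichlets with concentration below one; under mean-field variational Bayes the usual relative-frequency M-step becomes a digamma-adjusted update. A textbook-style sketch (not code from the cited work):

```python
from math import exp
from scipy.special import digamma

def vb_emission_probs(counts, vocab_size, alpha=0.01):
    """Variational-Bayes style emission update for one HMM state with a
    symmetric Dirichlet(alpha) prior; alpha < 1 biases the estimate
    toward the skewed, sparse distributions typical of POS tags.
    `counts` holds expected counts for observed words only."""
    total = sum(counts.values())
    denom = exp(digamma(total + vocab_size * alpha))
    return {w: exp(digamma(c + alpha)) / denom for w, c in counts.items()}

print(vb_emission_probs({"the": 90.0, "dog": 10.0}, vocab_size=50000))
```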
1901.01538 | 2743442682 | Nowadays, web services play a major role in the development of enterprise applications. Many such applications are now developed using a service-oriented architecture (SOA), where microservices are one of its most popular kinds. A RESTful web service will provide data via an API over the network using HTTP, possibly interacting with databases and other web services. Testing a RESTful API poses challenges, as inputs/outputs are sequences of HTTP requests/responses to a remote server. Many approaches in the literature do black-box testing, as the tested API is a remote service whose code is not available. In this paper, we consider testing from the point of view of the developers, who do have full access to the code that they are writing. Therefore, we propose a fully automated white-box testing approach, where test cases are automatically generated using an evolutionary algorithm. Tests are rewarded based on code coverage and fault-finding metrics. We implemented our technique in a tool called EVOMASTER, which is open-source. Experiments on two open-source, yet non-trivial, RESTful services and an industrial one show that our novel technique automatically found 38 real bugs in those applications. However, the obtained code coverage is lower than that achieved by the manually written test suites already existing in those services. Research directions on how to further improve such an approach are therefore discussed. | Canfora and Di Penta provided a discussion on the trends and challenges of SOA testing @cite_36 . Afterwards, they provided a more detailed survey @cite_2 . There are different kinds of testing for SOA (unit, integration, regression, robustness, etc.), which also depend on which stakeholders are involved, e.g., service developers, service providers, service integrators and third-party certifiers. Also, @cite_10 discussed the trends and challenges in SOA validation and verification. | {
"cite_N": [
"@cite_36",
"@cite_10",
"@cite_2"
],
"mid": [
"2171264441",
"2489883494",
"2094545987"
],
"abstract": [
"This paper provides users and system integrators with an overview of service-oriented architecture (SOA) testing's fundamental technical issues and solutions, focusing on Web services as a practical implementation of the SOA model. The paper discusses SOA testing across two dimensions: testing perspectives, wherein various stakeholders have different needs and raise different testing requirements; and testing level, wherein each SOA testing level poses unique challenges",
"Service Oriented Architecture (SOA) is changing the way in which software applications are designed, deployed and maintained. A service-oriented application consists of the runtime composition of autonomous services that are typically owned and controlled by different organizations. This decentralization impacts on the dependability of applications that consist of dynamic services agglomerates, and challenges their validation. Different techniques can be used or combined for the verification of dependability aspects, spanning over traditional off-line testing approaches up till monitoring and on-line testing. In this chapter we discuss issues and opportunities of SOA validation, identify three different stages for validation along the service life-cycle model, and overview some proposed research approaches and tools. The emphasis is on on-line testing, which to us is the most peculiar stage in the SOA validation process. Finally, we claim that on-line testing is only possible within an agreed governance framework.",
"Testing of Service Oriented Architectures (SOA) plays a critical role in ensuring a successful deployment in any enterprise. SOA testing must span several levels, from individual services to inter-enterprise federations of systems, and must cover functional and non-functional aspects. SOA unique combination of features, such as run-time discovery of services, ultra-late binding, QoS aware composition, and SLA automated negotiation, challenge many existing testing techniques. As an example, run-time discovery and ultra-late binding entail that the actual configuration of a system is known only during the execution, and this makes many existing integration testing techniques inadequate. Similarly, QoS aware composition and SLA automated negotiation means that a service may deliver with different performances in different contexts, thus making most existing performance testing techniques to fail. Whilst SOA testing is a recent area of investigation, the literature presents a number of approaches and techniques that either extend traditional testing or develop novel ideas with the aim of addressing the specific problems of testing service-centric systems. This chapter reports a survey of recent research achievements related to SOA testing. Challenges are analyzed from the viewpoints of different stakeholders and solutions are presented for different levels of testing, including unit, integration, and regression testing. The chapter covers both functional and non-functional testing, and explores ways to improve the testability of SOA."
]
} |
1901.01538 | 2743442682 | Nowadays, web services play a major role in the development of enterprise applications. Many such applications are now developed using a service-oriented architecture (SOA), where microservices are one of its most popular kinds. A RESTful web service will provide data via an API over the network using HTTP, possibly interacting with databases and other web services. Testing a RESTful API poses challenges, as inputs/outputs are sequences of HTTP requests/responses to a remote server. Many approaches in the literature do black-box testing, as the tested API is a remote service whose code is not available. In this paper, we consider testing from the point of view of the developers, who do have full access to the code that they are writing. Therefore, we propose a fully automated white-box testing approach, where test cases are automatically generated using an evolutionary algorithm. Tests are rewarded based on code coverage and fault-finding metrics. We implemented our technique in a tool called EVOMASTER, which is open-source. Experiments on two open-source, yet non-trivial, RESTful services and an industrial one show that our novel technique automatically found 38 real bugs in those applications. However, the obtained code coverage is lower than that achieved by the manually written test suites already existing in those services. Research directions on how to further improve such an approach are therefore discussed. | Subsequently, @cite_1 also carried out a survey on SOA testing, in which 177 papers were analysed. One of the interesting results of this survey is that, although the number of papers on SOA testing has been increasing throughout the years, only 11 | {
"cite_N": [
"@cite_1"
],
"mid": [
"1577602361"
],
"abstract": [
"Service-oriented architecture (SOA) is gaining momentum as an emerging distributed system architecture for business-to-business collaborations. This momentum can be observed in both industry and academic research. SOA presents new challenges and opportunities for testing and verification, leading to an upsurge in research. This paper surveys the previous work undertaken on testing and verification of service-centric systems, which in total are 177 papers, showing the strengths and weaknesses of current strategies and testing tools and identifying issues for future work. Copyright © 2012 John Wiley & Sons, Ltd."
]
} |
1901.01538 | 2743442682 | Nowadays, web services play a major role in the development of enterprise applications. Many such applications are now developed using a service-oriented architecture (SOA), where microservices are one of its most popular kinds. A RESTful web service will provide data via an API over the network using HTTP, possibly interacting with databases and other web services. Testing a RESTful API poses challenges, as inputs/outputs are sequences of HTTP requests/responses to a remote server. Many approaches in the literature do black-box testing, as the tested API is a remote service whose code is not available. In this paper, we consider testing from the point of view of the developers, who do have full access to the code that they are writing. Therefore, we propose a fully automated white-box testing approach, where test cases are automatically generated using an evolutionary algorithm. Tests are rewarded based on code coverage and fault-finding metrics. We implemented our technique in a tool called EVOMASTER, which is open-source. Experiments on two open-source, yet non-trivial, RESTful services and an industrial one show that our novel technique automatically found 38 real bugs in those applications. However, the obtained code coverage is lower than that achieved by the manually written test suites already existing in those services. Research directions on how to further improve such an approach are therefore discussed. | A lot of the work in the literature has focused on black-box testing of SOAP web services described with WSDL (Web Services Description Language). Different strategies have been proposed, for example @cite_6 @cite_37 @cite_13 @cite_19 @cite_7 @cite_18 . If those services also provide a semantic model (e.g., in OWL-S format), that can be exploited to create more "realistic" test data @cite_25 . When in SOAs the service compositions are described with BPEL (Web Services Business Process Execution Language), different techniques can be used to generate tests for those compositions @cite_12 @cite_34 . | {
"cite_N": [
"@cite_37",
"@cite_18",
"@cite_7",
"@cite_6",
"@cite_19",
"@cite_34",
"@cite_13",
"@cite_25",
"@cite_12"
],
"mid": [
"",
"2550065667",
"2117438772",
"",
"",
"2010425070",
"",
"2122186426",
"2086438339"
],
"abstract": [
"",
"Testing Web Services has become the spotlight of software engineering as an important means to assure the quality of Web application. Due to lacking of graphic interface and source code, Web services need an automated testing method, which is an important part in efficiently designing and generating test suite. However, the existing testing methods may lead to the redundancy of test suite and the decrease of fault-detecting ability since it cannot handle scenarios where the strengths of the different interactions are not uniform. With the purpose of solving this problem, firstly the formal tree model based on WSDL is constructed and the actual interaction relationship of each node is made sufficient consideration into, then the combinatorial testing is proposed to generate variable strength combinatorial test suite based on One-test-at-a-time strategy. At last test cases are minimized according to constraint rules. The results show that compared with conventional random testing, the proposed approach can detect more errors with the same amount of test cases which turning out to be more ideal than existing ones in size.",
"Web Services (WSs) are the W3C-endorsed realization of the Service-Oriented Architecture (SOA). Since they are supposed to be implementation-neutral, WSs are typically tested black-box at their interface. Such an interface is generally specified in an XML-based notation called the WS Description Language (WSDL). Conceptually, these WSDL documents are eligible for fully automated WS test generation using syntax-based testing approaches. Towards such goal, we introduce the WS-TAXI framework, in which we combine the coverage of WS operations with data-driven test generation. In this paper we present an early-stage implementation of WS-TAXI, obtained by the integration of two existing softwares: soapUI, a popular tool for WS testing, and TAXI, an application we have previously developed for the automated derivation of XML instances from a XML schema. WS-TAXI delivers a complete suite of test messages ready for execution. Test generation is driven by basic coverage criteria and by the application of some heuristics. The application of WS-TAXI to a real case study gave encouraging results.",
"",
"",
"Service oriented architectures (SOAs) allow for interesting and flexible software designs for complex problems. Such complex designs, however, require us to pay special attention to ensuring their quality. Extending earlier work on a testing and diagnosis concept for SOAs, we report in this paper on our first experience regarding an experimental setup with a test case generation algorithm based on random path selection, in contrast to our earlier work being based on considering all paths in a SOA's BPEL process description.",
"",
"Generating realistic test data is a major problem for software testers. Realistic test data generation for certain input types is hard to automate and therefore laborious. We propose a novel automated solution to test data generation that exploits existing web services as sources of realistic test data. Our approach is capable of generating realistic test data and also generating data based on tester-specified constraints. In experimental analysis, our prototype tool achieved between 93 and 100 success rates in generating realistic data using service compositions while random test data generation achieved only between 2 and 34 .",
"Testing is undisputedly a fundamental verification principle in the software landscape. Today's products require us to effectively handle and test huge, complex systems and in this context to tackle challenging traits like heterogeneity, distribution and controllability to name just a few. The advent of Service-Oriented Architectures with their inherent technological features like dynamics and heterogeneity exacerbated faced challenges, requiring us to evolve our technology. The traditional view of white or black box testing, for example, does not accommodate the multitude of shades of grey one should be able to exploit effectively for system-wide tests. Today, while there are a multitude of approaches for testing single services, there is still few work on methodological system tests for SOAs. In this paper we propose a corresponding workflow for tackling SOA testing and diagnosis, discuss SOA test case generation in more detail, and report preliminary research in that direction."
]
} |
1901.01538 | 2743442682 | Nowadays, web services play a major role in the development of enterprise applications. Many such applications are now developed using a service-oriented architecture (SOA), where microservices are one of its most popular kinds. A RESTful web service will provide data via an API over the network using HTTP, possibly interacting with databases and other web services. Testing a RESTful API poses challenges, as inputs/outputs are sequences of HTTP requests/responses to a remote server. Many approaches in the literature do black-box testing, as the tested API is a remote service whose code is not available. In this paper, we consider testing from the point of view of the developers, who do have full access to the code that they are writing. Therefore, we propose a fully automated white-box testing approach, where test cases are automatically generated using an evolutionary algorithm. Tests are rewarded based on code coverage and fault-finding metrics. We implemented our technique in a tool called EVOMASTER, which is open-source. Experiments on two open-source, yet non-trivial, RESTful services and an industrial one show that our novel technique automatically found 38 real bugs in those applications. However, the obtained code coverage is lower than that achieved by the manually written test suites already existing in those services. Research directions on how to further improve such an approach are therefore discussed. | Black-box testing has its advantages, but also its limitations. Coverage measures could improve the generation of tests but, often, web services are remote and there is no access to their source code. For testing purposes, @cite_20 proposed an approach in which feedback on code coverage is provided as a service, without exposing the internal details of the tested web services. However, the insertion of the code coverage probes had to be done manually. A similar approach has been developed by Ye and Jacobsen @cite_26 . In this paper, we too provide code coverage as a service, but our approach is fully automated (e.g., based on on-the-fly bytecode manipulation). | {
"cite_N": [
"@cite_26",
"@cite_20"
],
"mid": [
"2107915643",
"2071045582"
],
"abstract": [
"Whitening the testing of service-oriented applications can provide service consumers confidence on how well an application has been tested. However, to protect business interests of service providers and to prevent information leakage, the implementation details of services are usually invisible to service consumers. This makes it challenging to determine the test coverage of a service composition as a whole and design test cases effectively. To address this problem, we propose an approach to whiten the testing of service compositions based on events exposed by services. By deriving event interfaces to explore only necessary test coverage information from service implementations, our approach allows service consumers to determine test coverage based on selected events exposed by services at runtime without releasing the service implementation details. We also develop an approach to design test cases effectively based on event interfaces concerning both effectiveness and information leakage. The experimental results show that our approach outperforms existing testing approaches for service compositions with up to 49 percent more test coverage and an up to 24 percent higher fault-detection rate. Moreover, our solution can trade off effectiveness, efficiency, and information leakage for test case generation.",
"The attractive feature of Service Oriented Architecture (SOA) is that pieces of software conceived and developed by independent organizations can be dynamically composed to provide richer functionality. The same reasons that enable flexible compositions, however, also prevent the application of some traditional testing approaches, making SOA validation challenging and costly. Web services usually expose just an interface, enough to invoke them and develop some general (black-box) tests, but insufficient for a tester to develop an adequate understanding of the integration quality between the application and the independent web services. To address this lack we propose an approach that makes web services more transparent to testers through the addition of an intermediary service that provides coverage information. The approach, named Service Oriented Coverage Testing (SOCT), provides testers with feedback about how much a service is exercised by their tests without revealing the service internals. In SOCT, testing feedback is offered itself as a service, thus preserving SOA founding principles of loose coupling and implementation neutrality. In this paper we motivate and define the SOCT approach, and implement an instance of it. We also perform a study to asses SOCT feasibility and provide a preliminary evaluation of its viability and value."
]
} |
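One way to picture the "coverage as a service" idea from the record above: the instrumented service under test exposes its own coverage counters over HTTP, so an external test generator can query them without source access. A toy sketch; the endpoint, names and JSON layout are invented for illustration:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

covered = set()  # filled in by instrumentation probes injected into the SUT

def probe(target_id):
    """Called by instrumented code; records that a coverage target ran."""
    covered.add(target_id)

class CoverageHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"covered": sorted(covered)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

# HTTPServer(("localhost", 8090), CoverageHandler).serve_forever()
```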
1901.01538 | 2743442682 | Nowadays, web services play a major role in the development of enterprise applications. Many such applications are now developed using a service-oriented architecture (SOA), where microservices are one of its most popular kinds. A RESTful web service will provide data via an API over the network using HTTP, possibly interacting with databases and other web services. Testing a RESTful API poses challenges, as inputs/outputs are sequences of HTTP requests/responses to a remote server. Many approaches in the literature do black-box testing, as the tested API is a remote service whose code is not available. In this paper, we consider testing from the point of view of the developers, who do have full access to the code that they are writing. Therefore, we propose a fully automated white-box testing approach, where test cases are automatically generated using an evolutionary algorithm. Tests are rewarded based on code coverage and fault-finding metrics. We implemented our technique in a tool called EVOMASTER, which is open-source. Experiments on two open-source, yet non-trivial, RESTful services and an industrial one show that our novel technique automatically found 38 real bugs in those applications. However, the obtained code coverage is lower than that achieved by the manually written test suites already existing in those services. Research directions on how to further improve such an approach are therefore discussed. | Regarding RESTful web services, Chakrabarti and Kumar @cite_11 provided a testing framework for the "automatic generation of test cases corresponding to an exhaustive list of all valid combinations of query parameter values". @cite_32 proposed a technique to generate tests for RESTful APIs based on an idealised, property-based test model. Chakrabarti and Rodriquez @cite_23 defined a technique to formalize the "connectedness" of a RESTful service, and generate tests based on such a model. When formal models are available, techniques like those in @cite_22 and @cite_15 can be used as well. Our technique is significantly different from those approaches, as it does not need the presence of any formal model, can automatically exploit white-box information, and uses an evolutionary algorithm to guide the generation of effective tests. | {
"cite_N": [
"@cite_22",
"@cite_32",
"@cite_23",
"@cite_15",
"@cite_11"
],
"mid": [
"",
"2072893131",
"2058819392",
"2250931952",
"2160041234"
],
"abstract": [
"",
"Developing APIs as Web Services over HTTP implies adding an extra layer to software, compared to the ones that we would need to develop an API distributed as, for example, a library. This additional layer must be included in testing too, but this implies that the software under test has an additional complexity due both to the need to use an intermediate protocol in tests and to the need to test compliance with the constraints imposed by that protocol: in this case the constraints defined by the REST architectural style. On the other hand, these requirements are common to all the Web Services, and because of that, we should be able to abstract this aspect of the testing model so that we can reuse it in testing any Web Service. In this paper, as a first step towards automating the testing of Web Services over HTTP, we describe a practical mechanism and model for testing RESTful Web Services without side effects and give an example of how we successfully adapted that mechanism to test two different existing Web Services: Storage Room by Thriventures and Google Tasks by Google. For this task we have used Erlang together with state machine models in the property-based testing tool Quviq QuickCheck, implemented using the statem module.",
"In the context of RESTful web-services, connectedness refers to the property wherein every resource in the web-service is reachable from the base resource by successive HTTP GET requests. Presence (or absence) of connectedness has practical implications and hence is an important property of RESTful web-services. In this article, we present an algorithm for testing the connectedness of RESTful webservices. Using a formal specification of the web-service, our algorithm tests the connectedness of a web-service automatically. We also discuss a formal notation we have developed to specify RESTful web-services. Using our notation, we wrote the formal specification for a prototype RESTful web-service which has been developed for internal use in our organisation. Using this specification we employed our method to conduct automated testing of the above webservice. Many functional defects apart from those related to connectedness were detected early during development. This demonstrated that both our specification notation and our testing method are promising innovations in the direction of specification and testing of RESTful web-services.",
"In contrast to the increasing popularity of REpresentational State Transfer (REST), systematic testing of RESTful Application Programming Interfaces (API) has not attracted much attention so far. This paper describes different aspects of automated testing of RESTful APIs. Later, we focus on functional and security tests, for which we apply a technique called model-based software development. Based on an abstract model of the RESTful API that comprises resources, states and transitions a software generator not only creates the source code of the RESTful API but also creates a large number of test cases that can be immediately used to test the implementation. This paper describes the process of developing a software generator for test cases using state-of-the-art tools and provides an example to show the feasibility of our approach.",
"Abstract—Representational state transfer (REST) is an architecturalstyle that has received significant attention fromsoftware engineers for implementing web-services due toits simplicity and scalability. By definition, web-services aredistributed, headless (lacking UI) and loosely coupled. Thispresents the implementers and testers of web-services withchallenges, which are different from those in testing of traditionalsoftware. REST has unique characteristics like uniforminterfaces, stateless communication, caching, etc. This also necessitatestaking a fresh look at web-service testing specificallyin the context of RESTful web-services. A large informaticinfrastructure being developed within our organisation islargely based on service-oriented concepts wherein many of theservices are RESTful. As a part of a research project namedTest-the-REST (TTR), we have developed an approach fortesting RESTful web-services of the above infrastructure. Theproject yielded in a number of novel technical innovations, e.g.a scalable plugin based architecture, an extensible XML basedtest specification format, a method for reusing and composingtest cases for use-case testing etc. A prototype of TTR was usedin testing a RESTful service of the above infrastructure early inthe construction phase. Many bugs were uncovered resulting insignificant value add. In this paper, we present our experienceand insights in developing and using Test-the-REST."
]
} |
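To make the kind of generated test discussed in the record above concrete, here is a create-then-read check against a running RESTful service. The base URL, resource shape and `id` field are hypothetical; the point is the HTTP-level oracle such techniques emit:

```python
import requests

BASE = "http://localhost:8080/api/items"  # hypothetical service under test

def test_created_resource_is_readable():
    created = requests.post(BASE, json={"name": "example"})
    assert created.status_code in (200, 201)
    # Follow the Location header if present, else build the URI from the id.
    uri = created.headers.get("Location") or f"{BASE}/{created.json()['id']}"
    fetched = requests.get(uri)
    assert fetched.status_code == 200
    assert fetched.json()["name"] == "example"
```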
1901.01538 | 2743442682 | Nowadays, web services play a major role in the development of enterprise applications. Many such applications are now developed using a service-oriented architecture (SOA), where microservices are one of its most popular kinds. A RESTful web service will provide data via an API over the network using HTTP, possibly interacting with databases and other web services. Testing a RESTful API poses challenges, as inputs/outputs are sequences of HTTP requests/responses to a remote server. Many approaches in the literature do black-box testing, as the tested API is a remote service whose code is not available. In this paper, we consider testing from the point of view of the developers, who do have full access to the code that they are writing. Therefore, we propose a fully automated white-box testing approach, where test cases are automatically generated using an evolutionary algorithm. Tests are rewarded based on code coverage and fault-finding metrics. We implemented our technique in a tool called EVOMASTER, which is open-source. Experiments on two open-source, yet non-trivial, RESTful services and an industrial one show that our novel technique automatically found 38 real bugs in those applications. However, the obtained code coverage is lower than that achieved by the manually written test suites already existing in those services. Research directions on how to further improve such an approach are therefore discussed. | Regarding the usage of evolutionary techniques for testing web services, Di Penta @cite_35 proposed an approach for testing Service Level Agreements (SLA). Given an API in which a contract is formally defined stating, for example, that "the service provider guarantees to the service consumer a response time less than 30 ms and a resolution greater or equal to 300 dpi", an evolutionary algorithm is used to generate tests that break those SLAs. The fitness function is based on how far a test is from breaking the tested SLA, which can be measured after its execution. | {
"cite_N": [
"@cite_35"
],
"mid": [
"2120995957"
],
"abstract": [
"The diffusion of service oriented architectures introduces the need for novel testing approaches. On the one side, testing must be able to identify failures in the functionality provided by service. On the other side, it needs to identify cases in which the Service Level Agreement (SLA) negotiated between the service provider and the service consumer is not met. This would allow the developer to improve service performances, where needed, and the provider to avoid promising Quality of Service (QoS) levels that cannot be guaranteed. This paper proposes the use of Genetic Algorithms to generate inputs and configurations for service-oriented systems that cause SLA violations. The approach has been implemented in a tool and applied to an audio processing workflow and to a service for chart generation. In both cases, the approach was able to produce test data able to violate some QoS constraints."
]
} |
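A minimal Python sketch of the search-based SLA testing idea summarized in the record above, under stated assumptions: `invoke_service` is a hypothetical stub standing in for a real call to the API under test, and the 30 ms bound mirrors the example contract quoted in the text. This illustrates the distance-based fitness, not the actual tool from @cite_35.

```python
import random

SLA_RESPONSE_MS = 30.0  # negotiated bound: provider promises responses under 30 ms

def invoke_service(params):
    # Hypothetical stub: a real test would call the deployed service and
    # measure its actual response time.
    base = 5.0 + 0.1 * params["payload_kb"]
    return base + random.uniform(0.0, 2.0)

def fitness(params):
    # Distance from breaking the SLA; zero or less means the SLA was violated.
    return SLA_RESPONSE_MS - invoke_service(params)

def mutate(params):
    child = dict(params)
    child["payload_kb"] = max(1, child["payload_kb"] + random.randint(-20, 20))
    return child

population = [{"payload_kb": random.randint(1, 100)} for _ in range(20)]
for generation in range(100):
    population.sort(key=fitness)  # smaller distance = closer to a violation
    if fitness(population[0]) <= 0:
        print("SLA violated by input:", population[0])
        break
    survivors = population[:10]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(10)]
```

The fitness rewards inputs whose measured response time approaches the negotiated bound, steering the search toward configurations that eventually violate it.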
1901.01628 | 2908399268 | Byte-addressable persistent memories (PM) have finally made their way into production. An important and pressing problem that follows is how to deploy them in existing datacenters. One viable approach is to attach PM as self-contained devices to the network as disaggregated persistent memory, or DPM. DPM requires no changes to existing servers in datacenters; without the need to include a processor, DPM devices are cheap to build; and by sharing DPM across compute servers, they offer great elasticity and efficient resource packing. This paper explores different ways to organize DPM and to build data stores with DPM. Specifically, we propose three architectures of DPM: 1) compute nodes directly access DPM (DPM-Direct); 2) compute nodes send requests to a coordinator server, which then accesses DPM to complete a request (DPM-Central); and 3) compute nodes directly access DPM for data operations and communicate with a global metadata server for the control plane (DPM-Sep). Based on these architectures, we built three atomic, crash-consistent data stores. We evaluated their performance, scalability, and CPU cost with micro-benchmarks and YCSB. Our evaluation results show that DPM-Direct has great small-size read but poor write performance; DPM-Central has the best write performance when the scale of the cluster is small but performs poorly when the scale increases; and DPM-Sep performs well overall. | NAM-DB @cite_23 @cite_17 is an RDMA-based database system that uses one-sided communication for both read and write. Infiniswap @cite_66 is an RDMA-based remote memory paging system. Remote regions @cite_28 is a system that exposes remote memory as files that other host servers can access (through a file system interface). Although these three systems do not use two-way communication for the data path, they all rely on processing power at remote nodes to run data management tasks. In contrast, our DPM-Sep design runs all management tasks (control path) at a global metadata server, a separate node from the remote memory (a toy illustration of this control/data split follows this record). | {
"cite_N": [
"@cite_28",
"@cite_66",
"@cite_23",
"@cite_17"
],
"mid": [
"2886216368",
"2604701668",
"2964109949",
"1497100682"
],
"abstract": [
"We propose an intuitive abstraction for a process to export its memory to remote hosts, and to access the memory exported by others. This abstraction provides a simpler interface to RDMA and other remote memory technologies compared to the existing verbs interface. The key idea is that a process can export parts of its memory as files, called remote regions, that can be accessed through the usual file system operations (read, write, memory map, etc). We built this abstraction in the Linux kernel, and evaluated it. We show that remote regions are easy to use and perform close to RDMA. We demonstrate it via micro-benchmarks and by adapting two in-memory single-host applications to use remote memory: R and Metis. With R, using remote regions requires no changes to the code and allows R to run with remote memory that exceeds the physical memory of a host. With Metis, the modifications amount to ≈100 lines of code and they allow Metis to scale its performance across 8 hosts.",
"Memory-intensive applications suffer large performance loss when their working sets do not fully fit in memory. Yet, they cannot leverage otherwise unused remote memory when paging out to disks even in the presence of large imbalance in memory utilizations across a cluster. Existing proposals for memory disaggregation call for new architectures, new hardware designs, and or new programming models, making them infeasible. This paper describes the design and implementation of INFINISWAP, a remote memory paging system designed specifically for an RDMA network. INFINISWAP opportunistically harvests and transparently exposes unused memory to unmodified applications by dividing the swap space of each machine into many slabs and distributing them across many machines' remote memory. Because one-sided RDMA operations bypass remote CPUs, INFINISWAP leverages the power of many choices to perform decentralized slab placements and evictions. We have implemented and deployed INFINISWAP on an RDMA cluster without any modifications to user applications or the OS and evaluated its effectiveness using multiple workloads running on unmodified VoltDB, Memcached, PowerGraph, GraphX, and Apache Spark. Using INFINISWAP, throughputs of these applications improve between 4× (0:94×) to 15:4× (7:8×) over disk (Mellanox nbdX), and median and tail latencies between 5:4× (2×) and 61× (2:3×). INFINISWAP achieves these with negligible remote CPU usage, whereas nbdX becomes CPU-bound. INFINISWAP increases the overall memory utilization of a cluster and works well at scale.",
"The common wisdom is that distributed transactions do not scale. But what if distributed transactions could be made scalable using the next generation of networks and a redesign of distributed databases? There would no longer be a need for developers to worry about co-partitioning schemes to achieve decent performance. Application development would become easier as data placement would no longer determine how scalable an application is. Hardware provisioning would be simplified as the system administrator can expect a linear scale-out when adding more machines rather than some complex sub-linear function, which is highly application specific. In this paper, we present the design of our novel scalable database system NAM-DB and show that distributed transactions with the very common Snapshot Isolation guarantee can indeed scale using the next generation of RDMA-enabled network technology without any inherent bottlenecks. Our experiments with the TPC-C benchmark show that our system scales linearly to over 6.5 million new-order (14.5 million total) distributed transactions per second on 56 machines.",
"The next generation of high-performance networks with remote direct memory access (RDMA) capabilities requires a fundamental rethinking of the design of distributed in-memory DBMSs. These systems are commonly built under the assumption that the network is the primary bottleneck and should be avoided at all costs, but this assumption no longer holds. For instance, with InfiniBand FDR 4×, the bandwidth available to transfer data across the network is in the same ballpark as the bandwidth of one memory channel. Moreover, RDMA transfer latencies continue to rapidly improve as well. In this paper, we first argue that traditional distributed DBMS architectures cannot take full advantage of high-performance networks and suggest a new architecture to address this problem. Then, we discuss initial results from a prototype implementation of our proposed architecture for OLTP and OLAP, showing remarkable performance improvements over existing designs."
]
} |
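A hedged toy model, in plain Python, of the control/data split drawn in the record above: data-path accesses touch remote memory directly and involve no remote CPU, while management runs at a separate metadata node. The class names are illustrative only and do not correspond to any real RDMA API.

```python
class RemoteMemory:
    """Toy stand-in for a registered memory region on a DPM device."""
    def __init__(self, size):
        self.buf = bytearray(size)

class DataPath:
    """One-sided access: the client reads/writes bytes directly;
    no code runs on the memory node itself."""
    def __init__(self, mem):
        self.mem = mem
    def read(self, offset, length):
        return bytes(self.mem.buf[offset:offset + length])
    def write(self, offset, data):
        self.mem.buf[offset:offset + len(data)] = data

class MetadataServer:
    """Control path on a separate node: space allocation only,
    never touches user data."""
    def __init__(self, size):
        self.next_free, self.size = 0, size
    def alloc(self, length):
        off = self.next_free
        assert off + length <= self.size, "out of space"
        self.next_free += length
        return off

mem = RemoteMemory(4096)
ctrl, data = MetadataServer(4096), DataPath(mem)
off = ctrl.alloc(11)             # control plane: metadata server
data.write(off, b"hello world")  # data plane: direct one-sided write
print(data.read(off, 11))
```

In an actual DPM deployment the `DataPath` operations would be one-sided RDMA reads and writes, which is why the memory devices themselves need no processor on the data path.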
1901.01628 | 2908399268 | Byte-addressable persistent memories (PM) have finally made their way into production. An important and pressing problem that follows is how to deploy them in existing datacenters. One viable approach is to attach PM as self-contained devices to the network as disaggregated persistent memory, or DPM. DPM requires no changes to existing servers in datacenters; without the need to include a processor, DPM devices are cheap to build; and by sharing DPM across compute servers, they offer great elasticity and efficient resource packing. This paper explores different ways to organize DPM and to build data stores with DPM. Specifically, we propose three architectures of DPM: 1) compute nodes directly access DPM (DPM-Direct); 2) compute nodes send requests to a coordinator server, which then accesses DPM to complete a request (DPM-Central); and 3) compute nodes directly access DPM for data operations and communicate with a global metadata server for the control plane (DPM-Sep). Based on these architectures, we built three atomic, crash-consistent data stores. We evaluated their performance, scalability, and CPU cost with micro-benchmarks and YCSB. Our evaluation results show that DPM-Direct has great small-size read but poor write performance; DPM-Central has the best write performance when the scale of the cluster is small but performs poorly when the scale increases; and DPM-Sep performs well overall. | Mojim @cite_19 , Hotpot @cite_49 , and Octopus @cite_67 are three recent distributed PM systems. Mojim @cite_19 is the first system that targets using PM in distributed, datacenter environments. Mojim provides an efficient, RDMA-based, asynchronous replication mechanism for PM, to make it more reliable and available. Hotpot @cite_49 is the first distributed shared persistent memory system. It integrates the idea of distributed shared memory and distributed storage systems to provide a globally coherent, crash-consistent, and reliable distributed system that applications can access with memory instructions. Octopus @cite_67 is a distributed file system built on top of PM. None of these systems builds on the DPM model, which presents a whole new set of challenges. | {
"cite_N": [
"@cite_19",
"@cite_67",
"@cite_49"
],
"mid": [
"2090204040",
"2740453188",
"2758102451"
],
"abstract": [
"Next-generation non-volatile memories (NVMs) promise DRAM-like performance, persistence, and high density. They can attach directly to processors to form non-volatile main memory (NVMM) and offer the opportunity to build very low-latency storage systems. These high-performance storage systems would be especially useful in large-scale data center environments where reliability and availability are critical. However, providing reliability and availability to NVMM is challenging, since the latency of data replication can overwhelm the low latency that NVMM should provide. We propose Mojim, a system that provides the reliability and availability that large-scale storage systems require, while preserving the performance of NVMM. Mojim achieves these goals by using a two-tier architecture in which the primary tier contains a mirrored pair of nodes and the secondary tier contains one or more secondary backup nodes with weakly consistent copies of data. Mojim uses highly-optimized replication protocols, software, and networking stacks to minimize replication costs and expose as much of NVMM?s performance as possible. We evaluate Mojim using raw DRAM as a proxy for NVMM and using an industrial NVMM emulation system. We find that Mojim provides replicated NVMM with similar or even better performance than un-replicated NVMM (reducing latency by 27 to 63 and delivering between 0.4 to 2.7X the throughput). We demonstrate that replacing MongoDB's built-in replication system with Mojim improves MongoDB's performance by 3.4 to 4X.",
"",
"Next-generation non-volatile memories (NVMs) will provide byte addressability, persistence, high density, and DRAM-like performance. They have the potential to benefit many datacenter applications. However, most previous research on NVMs has focused on using them in a single machine environment. It is still unclear how to best utilize them in distributed, datacenter environments. We introduce Distributed Shared Persistent Memory (DSPM), a new framework for using persistent memories in distributed data-center environments. DSPM provides a new abstraction that allows applications to both perform traditional memory load and store instructions and to name, share, and persist their data. We built Hotpot, a kernel-level DSPM system that provides low-latency, transparent memory accesses, data persistence, data reliability, and high availability. The key ideas of Hotpot are to integrate distributed memory caching and data replication techniques and to exploit application hints. We implemented Hotpot in the Linux kernel and demonstrated its benefits by building a distributed graph engine on Hotpot and porting a NoSQL database to Hotpot. Our evaluation shows that Hotpot outperforms a recent distributed shared memory system by 1.3× to 3.2× and a recent distributed PM-based file system by 1.5× to 3.0×."
]
} |
1901.01628 | 2908399268 | Byte-addressable persistent memories (PM) have finally made their way into production. An important and pressing problem that follows is how to deploy them in existing datacenters. One viable approach is to attach PM as self-contained devices to the network as disaggregated persistent memory, or DPM. DPM requires no changes to existing servers in datacenters; without the need to include a processor, DPM devices are cheap to build; and by sharing DPM across compute servers, they offer great elasticity and efficient resource packing. This paper explores different ways to organize DPM and to build data stores with DPM. Specifically, we propose three architectures of DPM: 1) compute nodes directly access DPM (DPM-Direct); 2) compute nodes send requests to a coordinator server, which then accesses DPM to complete a request (DPM-Central); and 3) compute nodes directly access DPM for data operations and communicate with a global metadata server for the control plane (DPM-Sep). Based on these architectures, we built three atomic, crash-consistent data stores. We evaluated their performance, scalability, and CPU cost with micro-benchmarks and YCSB. Our evaluation results show that DPM-Direct has great small-size read but poor write performance; DPM-Central has the best write performance when the scale of the cluster is small but performs poorly when the scale increases; and DPM-Sep performs well overall. | ReFlex @cite_70 is a software-based system that builds on IX @cite_15 and exposes a logical block interface for users to access remote Flash with nearly identical performance to accessing local Flash. RAMCloud @cite_74 is a remote key-value storage system that stores a full copy of all data in DRAM and backups in disks or SSDs. Kamino-Tx @cite_75 proposes a new mechanism to perform transactional updates on PM without any copying of data in the critical path (a simplified sketch of this idea follows this record). These systems all rely on local computation power at the remote memory/storage servers to perform various online and recovery management services, which differs from the DPM model. | {
"cite_N": [
"@cite_70",
"@cite_15",
"@cite_75",
"@cite_74"
],
"mid": [
"2604759068",
"2169414316",
"2606766398",
"2074881976"
],
"abstract": [
"Remote access to NVMe Flash enables flexible scaling and high utilization of Flash capacity and IOPS within a datacenter. However, existing systems for remote Flash access either introduce significant performance overheads or fail to isolate the multiple remote clients sharing each Flash device. We present ReFlex, a software-based system for remote Flash access, that provides nearly identical performance to accessing local Flash. ReFlex uses a dataplane kernel to closely integrate networking and storage processing to achieve low latency and high throughput at low resource requirements. Specifically, ReFlex can serve up to 850K IOPS per core over TCP IP networking, while adding 21us over direct access to local Flash. ReFlex uses a QoS scheduler that can enforce tail latency and throughput service-level objectives (SLOs) for thousands of remote clients. We show that ReFlex allows applications to use remote Flash while maintaining their original performance with local Flash.",
"The conventional wisdom is that aggressive networking requirements, such as high packet rates for small messages and microsecond-scale tail latency, are best addressed outside the kernel, in a user-level networking stack. We present IX, a dataplane operating system that provides high I O performance, while maintaining the key advantage of strong protection offered by existing kernels. IX uses hardware virtualization to separate management and scheduling functions of the kernel (control plane) from network processing (dataplane). The data-plane architecture builds upon a native, zero-copy API and optimizes for both bandwidth and latency by dedicating hardware threads and networking queues to data-plane instances, processing bounded batches of packets to completion, and by eliminating coherence traffic and multi-core synchronization. We demonstrate that IX outperforms Linux and state-of-the-art, user-space network stacks significantly in both throughput and end-to-end latency. Moreover, IX improves the throughput of a widely deployed, key-value store by up to 3.6x and reduces tail latency by more than 2x.",
"Data structures for non-volatile memories have to be designed such that they can be atomically modified using transactions. Existing atomicity methods require data to be copied in the critical path which significantly increases the latency of transactions. These overheads are further amplified for transactions on byte-addressable persistent memories where often the byte ranges modified for data structure updates are significantly smaller compared to the granularity at which data can be efficiently copied and logged. We propose Kamino-Tx that provides a new way to perform transactional updates on non-volatile byte-addressable memories (NVM) without requiring any copying of data in the critical path. Kamino-Tx maintains an additional copy of data off the critical path to achieve atomicity. But in doing so Kamino-Tx has to overcome two important challenges of safety and minimizing NVM storage overhead. We propose a more dynamic approach to maintaining the additional copy of data to reduce storage overheads. To further mitigate the storage overhead of using Kamino-Tx in a replicated setting, we develop Kamino-Tx-Chain, a variant of Chain Replication where replicas perform in-place updates and do not maintain data copies locally; replicas in Kamino-Tx-Chain leverage other replicas as copies to roll back or forward for atomicity. Our results show that using Kamino-Tx increases throughput by up to 9.5x for unreplicated systems and up to 2.2x for replicated settings.",
"RAMCloud is a storage system that provides low-latency access to large-scale datasets. To achieve low latency, RAMCloud stores all data in DRAM at all times. To support large capacities (1PB or more), it aggregates the memories of thousands of servers into a single coherent key-value store. RAMCloud ensures the durability of DRAM-based data by keeping backup copies on secondary storage. It uses a uniform log-structured mechanism to manage both DRAM and secondary storage, which results in high performance and efficient memory usage. RAMCloud uses a polling-based approach to communication, bypassing the kernel to communicate directly with NICs; with this approach, client applications can read small objects from any RAMCloud storage server in less than 5μs, durable writes of small objects take about 13.5μs. RAMCloud does not keep multiple copies of data online; instead, it provides high availability by recovering from crashes very quickly (1 to 2 seconds). RAMCloud’s crash recovery mechanism harnesses the resources of the entire cluster working concurrently so that recovery performance scales with cluster size."
]
} |
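A hedged, heavily simplified Python sketch of the no-copy update idea attributed to Kamino-Tx above: the foreground write is applied in place, and a backup copy maintained off the critical path makes the update recoverable after a crash. This illustrates the concept only, not the paper's actual mechanism.

```python
class KaminoStyleStore:
    """Toy model: 'main' is updated in place on the critical path;
    'backup' is synced afterwards, off the critical path."""
    def __init__(self):
        self.main, self.backup = {}, {}
        self.in_flight = None            # key of an update not yet backed up

    def put(self, key, value):
        self.in_flight = key
        self.main[key] = value           # in-place update, no copy first
        # --- critical path ends here; the caller can return ---
        self.backup[key] = value         # asynchronous in a real system
        self.in_flight = None

    def recover(self):
        # After a crash, an unfinished update is rolled back from the backup.
        if self.in_flight is not None:
            k = self.in_flight
            if k in self.backup:
                self.main[k] = self.backup[k]
            else:
                self.main.pop(k, None)
            self.in_flight = None
```

Moving the copy off the critical path removes logging/copy latency from foreground writes; the cost shifts to the asynchronous backup sync and to recovery.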
1901.01649 | 2964182391 | Predicting the future sounds like a fantasy, but it is practical work. It is a key component of intelligent agents, such as self-driving vehicles, medical monitoring devices and robotics. In this work, we consider generating unseen future frames from previous observations, which is notoriously hard due to the uncertainty in frame dynamics. While recent works based on generative adversarial networks (GANs) have made remarkable progress, obstacles to accurate and realistic predictions remain. In this paper, we propose a novel GAN based on inter-frame differences to circumvent these difficulties. More specifically, our model is a multi-stage generative network named the Difference Guided Generative Adversarial Network (DGGAN). The DGGAN learns to explicitly enforce future-frame predictions that are guided by synthetic inter-frame differences. Given a sequence of frames, DGGAN first uses dual paths to generate meta information. One path, called the Coarse Frame Generator, predicts coarse details about future frames, and the other path, called the Difference Guide Generator, generates a difference image which includes complementary fine details. The coarse details are then refined under the guidance of the difference image with the support of GANs. With this model and novel architecture, we achieve state-of-the-art performance for future video prediction on UCF-101 and KITTI. | Our network makes full use of the advantages of the above models and avoids their weaknesses as much as possible. Our approach exploits a simple loss to learn both the coarse predicted frame and the Difference Guide (DG) information, which captures the variation of foreground and background from the previous frames to the new frames. The fusion of the DG and the adjacent previous frames then helps overcome the problem of blurriness. In addition, the final prediction is refined via GANs. To achieve more stable and superior training results, we use WGAN-GP @cite_26 in combination with CGAN @cite_0 instead of the original GAN (a sketch of the WGAN-GP penalty term follows this record). | {
"cite_N": [
"@cite_0",
"@cite_26"
],
"mid": [
"2125389028",
"2605135824"
],
"abstract": [
"Generative Adversarial Nets [8] were recently introduced as a novel way to train generative models. In this work we introduce the conditional version of generative adversarial nets, which can be constructed by simply feeding the data, y, we wish to condition on to both the generator and discriminator. We show that this model can generate MNIST digits conditioned on class labels. We also illustrate how this model could be used to learn a multi-modal model, and provide preliminary examples of an application to image tagging in which we demonstrate how this approach can generate descriptive tags which are not part of training labels.",
"Generative Adversarial Networks (GANs) are powerful generative models, but suffer from training instability. The recently proposed Wasserstein GAN (WGAN) makes progress toward stable training of GANs, but sometimes can still generate only low-quality samples or fail to converge. We find that these problems are often due to the use of weight clipping in WGAN to enforce a Lipschitz constraint on the critic, which can lead to undesired behavior. We propose an alternative to clipping weights: penalize the norm of gradient of the critic with respect to its input. Our proposed method performs better than standard WGAN and enables stable training of a wide variety of GAN architectures with almost no hyperparameter tuning, including 101-layer ResNets and language models over discrete data. We also achieve high quality generations on CIFAR-10 and LSUN bedrooms."
]
} |
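The WGAN-GP objective cited above replaces weight clipping with a gradient penalty on samples interpolated between real and generated data. Below is a minimal PyTorch-style sketch of that penalty term; the critic `D` and 4D image batches of shape (N, C, H, W) are assumptions, and in the conditional (CGAN) variant used above the conditioning input would additionally be fed to `D`.

```python
import torch

def gradient_penalty(D, real, fake, lambda_gp=10.0):
    """WGAN-GP penalty: the gradient norm of D should be 1 on interpolates."""
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)  # per-sample mix
    x_hat = (eps * real + (1 - eps) * fake.detach()).requires_grad_(True)
    d_out = D(x_hat)
    grads, = torch.autograd.grad(
        outputs=d_out, inputs=x_hat,
        grad_outputs=torch.ones_like(d_out),
        create_graph=True)
    grad_norm = grads.view(grads.size(0), -1).norm(2, dim=1)
    return lambda_gp * ((grad_norm - 1.0) ** 2).mean()

# The critic loss for one batch would then look like:
#   loss_D = D(fake).mean() - D(real).mean() + gradient_penalty(D, real, fake)
```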
1901.01488 | 2907012035 | Selectivity estimation remains a critical task in query optimization even after decades of research and industrial development. Optimizers rely on accurate selectivities when generating execution plans. They maintain a large range of statistical synopses for efficiently estimating selectivities. Nonetheless, small errors -- propagated exponentially -- can lead to severely sub-optimal plans, especially for complex predicates. Database systems for modern computing architectures rely on extensive in-memory processing supported by massive multithread parallelism and vectorized instructions. However, they maintain the same synopses-based approach to query optimization as traditional disk-based databases. We introduce a novel query optimization paradigm for in-memory and GPU-accelerated databases based on exact selectivity computation (ESC). The central idea in ESC is to compute selectivities exactly through queries during query optimization. In order to make the process efficient, we propose several optimizations targeting the selection and materialization of tables and predicates to which ESC is applied. We implement ESC in the MapD open-source database system. Experiments on the TPC-H and SSB benchmarks show that ESC incurs a constant overhead of less than 30 milliseconds when running on GPU and generates improved query execution plans that are as much as 32X faster. | A large variety of synopses have been proposed for selectivity estimation. Many of them are also used in approximate query processing @cite_10 . Histograms are the ones most frequently implemented in practice. While they provide accurate estimates for a single attribute, they cannot handle correlations between attributes as the dimensionality increases @cite_12 . The same holds for sketches @cite_9 . Samples are sensitive to skewed and sparse data distributions @cite_5 . Although graphical models @cite_8 avoid all these problems, they are limited to conjunctive queries and perform best for equality predicates. Since ESC is exact, it achieves both accuracy and generality (a small numerical illustration of the correlation problem follows this record). | {
"cite_N": [
"@cite_8",
"@cite_9",
"@cite_5",
"@cite_10",
"@cite_12"
],
"mid": [
"2396635388",
"2058858139",
"",
"2022858489",
"1581547316"
],
"abstract": [
"As a result of decades of research and industrial development, modern query optimizers are complex software artifacts. However, the quality of the query plan chosen by an optimizer is largely determined by the quality of the underlying statistical summaries. Small selectivity estimation errors, propagated exponentially, can lead to severely sub-optimal plans. Modern optimizers typically maintain one-dimensional statistical summaries and make the attribute value independence and join uniformity assumptions for efficiently estimating selectivities. Therefore, selectivity estimation errors in today’s optimizers are frequently caused by missed correlations between attributes. We present a selectivity estimation approach that does not make the independence assumptions. By carefully using concepts from the field of graphical models, we are able to factor the joint probability distribution of all the attributes in the database into small, usually two-dimensional distributions. We describe several optimizations that can make selectivity estimation highly efficient, and we present a complete implementation inside PostgreSQL’s query optimizer. Experimental results indicate an order of magnitude better selectivity estimates, while keeping optimization time in the range of tens of milliseconds.",
"Sketching techniques can provide approximate answers to aggregate queries either for data-streaming or distributed computation. Small space summaries that have linearity properties are required for both types of applications. The prevalent method for analyzing sketches uses moment analysis and distribution independent bounds based on moments. This method produces clean, easy to interpret, theoretical bounds that are especially useful for deriving asymptotic results. However, the theoretical bounds obscure fine details of the behavior of various sketches and they are mostly not indicative of which type of sketches should be used in practice. Moreover, no significant empirical comparison between various sketching techniques has been published, which makes the choice even harder. In this paper, we take a close look at the sketching techniques proposed in the literature from a statistical point of view with the goal of determining properties that indicate the actual behavior and producing tighter confidence bounds. Interestingly, the statistical analysis reveals that two of the techniques, Fast-AGMS and Count-Min, provide results that are in some cases orders of magnitude better than the corresponding theoretical predictions. We conduct an extensive empirical study that compares the different sketching techniques in order to corroborate the statistical analysis with the conclusions we draw from it. The study indicates the expected performance of various sketches, which is crucial if the techniques are to be used by practitioners. The overall conclusion of the study is that Fast-AGMS sketches are, for the full spectrum of problems, either the best, or close to the best, sketching technique. This makes Fast-AGMS sketches the preferred choice irrespective of the situation.",
"",
"Methods for Approximate Query Processing (AQP) are essential for dealing with massive data. They are often the only means of providing interactive response times when exploring massive datasets, and are also needed to handle high speed data streams. These methods proceed by computing a lossy, compact synopsis of the data, and then executing the query of interest against the synopsis rather than the entire dataset. We describe basic principles and recent developments in AQP. We focus on four key synopses: random samples, histograms, wavelets, and sketches. We consider issues such as accuracy, space and time efficiency, optimality, practicality, range of applicability, error bounds on query answers, and incremental maintenance. We also discuss the trade-offs between the different synopsis types.",
"The result size of a query that involves multiple attributes from the same relation depends on these attributes’joinr data distribution, i.e., the frequencies of all combinations of attribute values. To simplify the estimation of that size, most commercial systems make the artribute value independenceassumption and maintain statistics (typically histograms) on individual attributes only. In reality, this assumption is almost always wrong and the resulting estimations tend to be highly inaccurate. In this paper, we propose two main alternatives to effectively approximate (multi-dimensional) joint data distributions. (a) Using a multi-dimensional histogram, (b) Using the Singular Value Decomposition (SVD) technique from linear algebra. An extensive set of experiments demonstrates the advantages and disadvantages of the two approaches and the benefits of both compared to the independence assumption."
]
} |
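A small self-contained illustration, on synthetic data, of the correlation problem noted in the record above: under the independence assumption made by one-dimensional histograms, the selectivity of a conjunctive predicate is estimated as the product of per-attribute selectivities, while an exact count (the ESC idea) is immune to the correlation.

```python
import random

random.seed(0)
# Two perfectly correlated attributes: b == a for every row.
rows = [(a, a) for a in (random.randint(0, 9) for _ in range(10_000))]

# Predicate: a = 3 AND b = 3
sel_a = sum(1 for a, _ in rows if a == 3) / len(rows)
sel_b = sum(1 for _, b in rows if b == 3) / len(rows)

est_independent = sel_a * sel_b                                   # histogram-style
exact = sum(1 for a, b in rows if a == 3 and b == 3) / len(rows)  # ESC-style

print(f"independence estimate: {est_independent:.4f}")  # ~0.01
print(f"exact selectivity:     {exact:.4f}")            # ~0.10, 10x larger
```

With perfectly correlated attributes the independence estimate is off by an order of magnitude; propagated through join ordering, such errors are what drive the sub-optimal plans.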
1901.01488 | 2907012035 | Selectivity estimation remains a critical task in query optimization even after decades of research and industrial development. Optimizers rely on accurate selectivities when generating execution plans. They maintain a large range of statistical synopses for efficiently estimating selectivities. Nonetheless, small errors -- propagated exponentially -- can lead to severely sub-optimal plans, especially for complex predicates. Database systems for modern computing architectures rely on extensive in-memory processing supported by massive multithread parallelism and vectorized instructions. However, they maintain the same synopses-based approach to query optimization as traditional disk-based databases. We introduce a novel query optimization paradigm for in-memory and GPU-accelerated databases based on exact selectivity computation (ESC). The central idea in ESC is to compute selectivities exactly through queries during query optimization. In order to make the process efficient, we propose several optimizations targeting the selection and materialization of tables and predicates to which ESC is applied. We implement ESC in the MapD open-source database system. Experiments on the TPC-H and SSB benchmarks show that ESC incurs a constant overhead of less than 30 milliseconds when running on GPU and generates improved query execution plans that are as much as 32X faster. | The LEO optimizer @cite_6 monitors relational operator selectivities during query execution. These exact values are continuously compared with the synopsis-based estimates used during optimization. When the two deviate significantly, the synopses are adjusted to automatically correct their error for future queries. Mid-query re-optimization @cite_14 takes immediate action and updates the execution plan for the running query. This, however, requires dropping already computed partial results, which introduces considerable overhead. ESC computes the exact cardinalities at optimization time and applies them to the current query, which therefore has the best plan from the beginning (a sketch of such exact probes follows this record). | {
"cite_N": [
"@cite_14",
"@cite_6"
],
"mid": [
"1975731516",
"2131413649"
],
"abstract": [
"For a number of reasons, even the best query optimizers can very often produce sub-optimal query execution plans, leading to a significant degradation of performance. This is especially true in databases used for complex decision support queries and or object-relational databases. In this paper, we describe an algorithm that detects sub-optimality of a query execution plan during query execution and attempts to correct the problem. The basic idea is to collect statistics at key points during the execution of a complex query. These statistics are then used to optimize the execution of the query, either by improving the resource allocation for that query, or by changing the execution plan for the remainder of the query. To ensure that this does not significantly slow down the normal execution of a query, the Query Optimizer carefully chooses what statistics to collect, when to collect them, and the circumstances under which to re-optimize the query. We describe an implementation of this algorithm in the Paradise Database System, and we report on performance studies, which indicate that this can result in significant improvements in the performance of complex queries.",
"Structured Query Language (SQL) has emerged as an industry standard for querying relational database management systems, largely because a user need only specify what data are wanted, not the details of how to access those data. A query optimizer uses a mathematical model of query execution to determine automatically the best way to access and process any given SQL query. This model is heavily dependent upon the optimizer's estimates for the number of rows that will result at each step of the query execution plan (QEP), especially for complex queries involving many predicates and or operations. These estimates rely upon statistics on the database and modeling assumptions that may or may not be true for a given database. In this paper, we discuss an autonomic query optimizer that automatically self-validates its model without requiring any user interaction to repair incorrect statistics or cardinality estimates. By monitoring queries as they execute, the autonomic optimizer compares the optimizer's estimates with actual cardinalities at each step in a QEP, and computes adjustments to its estimates that may be used during future optimizations of similar queries. Moreover, the detection of estimation errors can also trigger reoptimization of a query in mid-execution. The autonomic refinement of the optimizer's model can result in a reduction of query execution time by orders of magnitude at negligible additional run-time cost. We discuss various research issues and practical considerations that were addressed during our implementation of a first prototype of LEO, a LEarning Optimizer for DB2® (Database 2TM) that learns table access cardinalities and for future queries corrects the estimation error for simple predicates by adjusting the database statistics of DB2."
]
} |
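A hedged sketch of the ESC idea itself, using SQLite from the Python standard library rather than MapD: before committing to a plan, the optimizer issues cheap COUNT(*) probes for candidate predicates and uses the exact cardinalities in place of synopsis estimates. The table and predicates are illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE lineitem (quantity INT, discount REAL)")
conn.executemany("INSERT INTO lineitem VALUES (?, ?)",
                 [(q, q * 0.01) for q in range(1, 1001)])

def exact_selectivity(conn, table, predicate):
    """ESC-style probe: run the predicate as a COUNT(*) query."""
    total = conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
    match = conn.execute(
        f"SELECT COUNT(*) FROM {table} WHERE {predicate}").fetchone()[0]
    return match / total

# During optimization, each candidate predicate gets an exact cardinality:
for pred in ["quantity < 24", "discount BETWEEN 0.05 AND 0.07"]:
    print(pred, "->", exact_selectivity(conn, "lineitem", pred))
```

Such probes are cheap on an in-memory or GPU-accelerated engine, which is what makes paying this cost at optimization time viable.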
1901.01535 | 2951394060 | In this paper, we consider the problem of reconstructing a dense 3D model using images captured from different views. Recent methods based on convolutional neural networks (CNN) allow learning the entire task from data. However, they do not incorporate the physics of image formation such as perspective geometry and occlusion. Instead, classical approaches based on Markov Random Fields (MRF) with ray-potentials explicitly model these physical processes, but they cannot cope with large surface appearance variations across different viewpoints. In this paper, we propose RayNet, which combines the strengths of both frameworks. RayNet integrates a CNN that learns view-invariant feature representations with an MRF that explicitly encodes the physics of perspective projection and occlusion. We train RayNet end-to-end using empirical risk minimization. We thoroughly evaluate our approach on challenging real-world datasets and demonstrate its benefits over a piece-wise trained baseline, hand-crafted models as well as other learning-based approaches. | 3D reconstruction methods can be roughly categorized into model-based and learning-based approaches, which learn the task from data. As a thorough survey on 3D reconstruction techniques is beyond the scope of this paper, we discuss only the most related approaches and refer to @cite_45 @cite_13 @cite_0 for a more thorough review. | {
"cite_N": [
"@cite_0",
"@cite_45",
"@cite_13"
],
"mid": [
"",
"2033819227",
"2536043048"
],
"abstract": [
"",
"From the Publisher: A basic problem in computer vision is to understand the structure of a real world scene given several images of it. Recent major developments in the theory and practice of scene reconstruction are described in detail in a unified framework. The book covers the geometric principles and how to represent objects algebraically so they can be computed and applied. The authors provide comprehensive background material and explain how to apply the methods and implement the algorithms directly.",
"This paper proposes a fully automated 3D reconstruction and visualization system for architectural scenes (interiors and exteriors). The reconstruction of indoor environments from photographs is particularly challenging due to texture-poor planar surfaces such as uniformly-painted walls. Our system first uses structure-from-motion, multi-view stereo, and a stereo algorithm specifically designed for Manhattan-world scenes (scenes consisting predominantly of piece-wise planar surfaces with dominant directions) to calibrate the cameras and to recover initial 3D geometry in the form of oriented points and depth maps. Next, the initial geometry is fused into a 3D model with a novel depth-map integration algorithm that, again, makes use of Manhattan-world assumptions and produces simplified 3D models. Finally, the system enables the exploration of reconstructed environments with an interactive, image-based 3D viewer. We demonstrate results on several challenging datasets, including a 3D reconstruction and image-based walk-through of an entire floor of a house, the first result of this kind from an automated computer vision system."
]
} |
1901.01535 | 2951394060 | In this paper, we consider the problem of reconstructing a dense 3D model using images captured from different views. Recent methods based on convolutional neural networks (CNN) allow learning the entire task from data. However, they do not incorporate the physics of image formation such as perspective geometry and occlusion. Instead, classical approaches based on Markov Random Fields (MRF) with ray-potentials explicitly model these physical processes, but they cannot cope with large surface appearance variations across different viewpoints. In this paper, we propose RayNet, which combines the strengths of both frameworks. RayNet integrates a CNN that learns view-invariant feature representations with an MRF that explicitly encodes the physics of perspective projection and occlusion. We train RayNet end-to-end using empirical risk minimization. We thoroughly evaluate our approach on challenging real-world datasets and demonstrate its benefits over a piece-wise trained baseline, hand-crafted models as well as other learning-based approaches. | Learning-based 3D Reconstruction The development of large 3D shape databases @cite_41 @cite_10 has fostered learning-based solutions @cite_16 @cite_42 @cite_30 @cite_12 to 3D reconstruction. Choy et al. @cite_16 propose a unified framework for single and multi-view reconstruction by using a 3D recurrent neural network (3D-R2N2) based on long short-term memory (LSTM). Girdhar et al. @cite_30 propose a network that embeds image and shape together for single view 3D volumetric shape generation. Hierarchical space partitioning structures (e.g., octrees) have been proposed to increase the output resolution beyond @math voxels @cite_21 @cite_39 @cite_5 . | {
"cite_N": [
"@cite_30",
"@cite_41",
"@cite_42",
"@cite_21",
"@cite_39",
"@cite_5",
"@cite_16",
"@cite_10",
"@cite_12"
],
"mid": [
"2964137676",
"2190691619",
"2546066744",
"2949130266",
"2606840594",
"2603429625",
"2342277278",
"1920022804",
"2734656015"
],
"abstract": [
"What is a good vector representation of an object? We believe that it should be generative in 3D, in the sense that it can produce new 3D objects; as well as be predictable from 2D, in the sense that it can be perceived from 2D images. We propose a novel architecture, called the TL-embedding network, to learn an embedding space with these properties. The network consists of two components: (a) an autoencoder that ensures the representation is generative; and (b) a convolutional network that ensures the representation is predictable. This enables tackling a number of tasks including voxel prediction from 2D images and 3D model retrieval. Extensive experimental analysis demonstrates the usefulness and versatility of this embedding.",
"We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans.",
"We study the problem of 3D object generation. We propose a novel framework, namely 3D Generative Adversarial Network (3D-GAN), which generates 3D objects from a probabilistic space by leveraging recent advances in volumetric convolutional networks and generative adversarial nets. The benefits of our model are three-fold: first, the use of an adversarial criterion, instead of traditional heuristic criteria, enables the generator to capture object structure implicitly and to synthesize high-quality 3D objects; second, the generator establishes a mapping from a low-dimensional probabilistic space to the space of 3D objects, so that we can sample objects without a reference image or CAD models, and explore the 3D object manifold; third, the adversarial discriminator provides a powerful 3D shape descriptor which, learned without supervision, has wide applications in 3D object recognition. Experiments demonstrate that our method generates high-quality 3D objects, and our unsupervisedly learned features achieve impressive performance on 3D object recognition, comparable with those of supervised learning methods.",
"In this paper, we present a learning based approach to depth fusion, i.e., dense 3D reconstruction from multiple depth images. The most common approach to depth fusion is based on averaging truncated signed distance functions, which was originally proposed by Curless and Levoy in 1996. While this method is simple and provides great results, it is not able to reconstruct (partially) occluded surfaces and requires a large number frames to filter out sensor noise and outliers. Motivated by the availability of large 3D model repositories and recent advances in deep learning, we present a novel 3D CNN architecture that learns to predict an implicit surface representation from the input depth maps. Our learning based method significantly outperforms the traditional volumetric fusion approach in terms of noise reduction and outlier suppression. By learning the structure of real world 3D objects and scenes, our approach is further able to reconstruct occluded regions and to fill in gaps in the reconstruction. We demonstrate that our learning based approach outperforms both vanilla TSDF fusion as well as TV-L1 fusion on the task of volumetric fusion. Further, we demonstrate state-of-the-art 3D shape completion results.",
"Recently, Convolutional Neural Networks have shown promising results for 3D geometry prediction. They can make predictions from very little input data such as a single color image. A major limitation of such approaches is that they only predict a coarse resolution voxel grid, which does not capture the surface of the objects well. We propose a general framework, called hierarchical surface prediction (HSP), which facilitates prediction of high resolution voxel grids. The main insight is that it is sufficient to predict high resolution voxels around the predicted surfaces. The exterior and interior of the objects can be represented with coarse resolution voxels. Our approach is not dependent on a specific input type. We show results for geometry prediction from color images, depth images and shape completion from partial voxel grids. Our analysis shows that our high resolution predictions are more accurate than low resolution predictions.",
"We present a deep convolutional decoder architecture that can generate volumetric 3D outputs in a compute- and memory-efficient manner by using an octree representation. The network learns to predict both the structure of the octree, and the occupancy values of individual cells. This makes it a particularly valuable technique for generating 3D shapes. In contrast to standard decoders acting on regular voxel grids, the architecture does not have cubic complexity. This allows representing much higher resolution outputs with a limited memory budget. We demonstrate this in several application domains, including 3D convolutional autoencoders, generation of objects and whole scenes from high-level representations, and shape from a single image.",
"Inspired by the recent success of methods that employ shape priors to achieve robust 3D reconstructions, we propose a novel recurrent neural network architecture that we call the 3D Recurrent Reconstruction Neural Network (3D-R2N2). The network learns a mapping from images of objects to their underlying 3D shapes from a large collection of synthetic data [13]. Our network takes in one or more images of an object instance from arbitrary viewpoints and outputs a reconstruction of the object in the form of a 3D occupancy grid. Unlike most of the previous works, our network does not require any image annotations or object class labels for training or testing. Our extensive experimental analysis shows that our reconstruction framework (i) outperforms the state-of-the-art methods for single view reconstruction, and (ii) enables the 3D reconstruction of objects in situations when traditional SFM SLAM methods fail (because of lack of texture and or wide baseline).",
"3D shape is a crucial but heavily underutilized cue in today's computer vision systems, mostly due to the lack of a good generic shape representation. With the recent availability of inexpensive 2.5D depth sensors (e.g. Microsoft Kinect), it is becoming increasingly important to have a powerful 3D shape representation in the loop. Apart from category recognition, recovering full 3D shapes from view-based 2.5D depth maps is also a critical part of visual understanding. To this end, we propose to represent a geometric 3D shape as a probability distribution of binary variables on a 3D voxel grid, using a Convolutional Deep Belief Network. Our model, 3D ShapeNets, learns the distribution of complex 3D shapes across different object categories and arbitrary poses from raw CAD data, and discovers hierarchical compositional part representation automatically. It naturally supports joint object recognition and shape completion from 2.5D depth maps, and it enables active object recognition through view planning. To train our 3D deep learning model, we construct ModelNet - a large-scale 3D CAD model dataset. Extensive experiments show that our 3D deep representation enables significant performance improvement over the-state-of-the-arts in a variety of tasks.",
""
]
} |
1901.01535 | 2951394060 | In this paper, we consider the problem of reconstructing a dense 3D model using images captured from different views. Recent methods based on convolutional neural networks (CNN) allow learning the entire task from data. However, they do not incorporate the physics of image formation such as perspective geometry and occlusion. Instead, classical approaches based on Markov Random Fields (MRF) with ray-potentials explicitly model these physical processes, but they cannot cope with large surface appearance variations across different viewpoints. In this paper, we propose RayNet, which combines the strengths of both frameworks. RayNet integrates a CNN that learns view-invariant feature representations with an MRF that explicitly encodes the physics of perspective projection and occlusion. We train RayNet end-to-end using empirical risk minimization. We thoroughly evaluate our approach on challenging real-world datasets and demonstrate its benefits over a piece-wise trained baseline, hand-crafted models as well as other learning-based approaches. | As most of the aforementioned methods solve the 3D reconstruction problem by recognizing the scene content, they are only applicable to object reconstruction and do not generalize well to novel object categories or full scenes. Towards a more general learning-based model for 3D reconstruction, @cite_32 @cite_6 propose to unproject the input images into 3D voxel space and process the concatenated unprojected volumes using a 3D convolutional neural network (a minimal unprojection sketch follows this record). While these approaches take projective geometry into account, they do not explicitly exploit occlusion relationships across viewpoints, as proposed in this paper. Instead, they rely on a generic 3D CNN to learn this mapping from data. We compare to @cite_6 in our experimental evaluation and obtain more accurate 3D reconstructions and significantly better runtime performance. In addition, the lightweight nature of our model's forward inference allows for reconstructing scenes at resolutions up to @math voxels in a single pass. In contrast, @cite_32 is limited to @math voxels and @cite_6 requires processing large volumes using a sliding window, thereby losing global spatial relationships. | {
"cite_N": [
"@cite_32",
"@cite_6"
],
"mid": [
"2963966978",
"2964243776"
],
"abstract": [
"We present a learnt system for multi-view stereopsis. In contrast to recent learning based methods for 3D reconstruction, we leverage the underlying 3D geometry of the problem through feature projection and unprojection along viewing rays. By formulating these operations in a differentiable manner, we are able to learn the system end-to-end for the task of metric 3D reconstruction. End-to-end learning allows us to jointly reason about shape priors while conforming to geometric constraints, enabling reconstruction from much fewer images (even a single image) than required by classical approaches as well as completion of unseen surfaces. We thoroughly evaluate our approach on the ShapeNet dataset and demonstrate the benefits over classical approaches and recent learning based methods.",
"This paper proposes an end-to-end learning framework for multiview stereopsis. We term the network SurfaceNet. It takes a set of images and their corresponding camera parameters as input and directly infers the 3D model. The key advantage of the framework is that both photo-consistency as well geometric relations of the surface structure can be directly learned for the purpose of multiview stereopsis in an end-to-end fashion. SurfaceNet is a fully 3D convolutional network which is achieved by encoding the camera parameters together with the images in a 3D voxel representation. We evaluate SurfaceNet on the large-scale DTU benchmark."
]
} |
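A minimal NumPy sketch of the unprojection step described in the record above: each voxel center is projected into an image with the camera matrix and the feature at that pixel is gathered into the volume. The shapes and the pinhole model are assumptions for illustration; note that, as the record points out, this step alone models no occlusion.

```python
import numpy as np

def unproject(features, P, grid):
    """Gather per-pixel features into a voxel grid.
    features: (H, W, C) image features; P: (3, 4) camera matrix;
    grid: (N, 3) voxel centers in world coordinates."""
    H, W, C = features.shape
    homo = np.concatenate([grid, np.ones((len(grid), 1))], axis=1)  # (N, 4)
    proj = homo @ P.T                                               # (N, 3)
    uv = proj[:, :2] / np.clip(proj[:, 2:3], 1e-6, None)            # perspective divide
    u, v = uv[:, 0].round().astype(int), uv[:, 1].round().astype(int)
    valid = (proj[:, 2] > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    out = np.zeros((len(grid), C), dtype=features.dtype)
    out[valid] = features[v[valid], u[valid]]   # occlusion is NOT modeled here
    return out

# Example: a toy 2x2x2 grid of voxel centers with an identity camera.
grid = np.array([[x, y, z] for x in range(2) for y in range(2) for z in range(2)], float)
feats = unproject(np.random.rand(4, 8, 16), np.hstack([np.eye(3), np.zeros((3, 1))]), grid)
```

Methods that stop here must learn occlusion implicitly with a generic 3D CNN, whereas RayNet's MRF with ray-potentials encodes it explicitly.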